# Peeter Joot's (OLD) Blog.


# Posts Tagged ‘energy’

## A final pre-exam update of my notes compilation for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on April 22, 2013

Here’s my third update of my notes compilation for this course, including all of the following:

April 21, 2013 Fermi function expansion for thermodynamic quantities

April 20, 2013 Relativistic Fermi Gas

April 10, 2013 Non-integral binomial coefficient

April 10, 2013 Energy distribution around mean energy

April 09, 2013 Velocity volume element to momentum volume element

April 04, 2013 Phonon modes

April 03, 2013 BEC and phonons

April 03, 2013 Max entropy, fugacity, and Fermi gas

April 02, 2013 Bosons

April 02, 2013 Relativistic density of states

March 28, 2013 Bosons

plus everything detailed in the description of my previous update and before.

## Fermi-Dirac function expansion for thermodynamic quantities

Posted by peeterjoot on April 21, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

In section 8.1 of [1] are some Fermi-Dirac \index{Fermi-Dirac function} expansions for $P$, $N$, and $U$. Let’s work through these in detail.

Our starting point is the relations

\begin{aligned}P V \beta = \ln Z_{\mathrm{G}} = \sum \ln \left( 1 + z e^{-\beta \epsilon} \right)\end{aligned} \hspace{\stretch{1}}(1.0.1a)

\begin{aligned}N = \sum \frac{1}{{ z^{-1} e^{\beta \epsilon} + 1 }}.\end{aligned} \hspace{\stretch{1}}(1.0.1b)
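These two relations are tied together by the fugacity: eq. 1.0.1b follows from eq. 1.0.1a via $N = z \, \partial_z \ln Z_{\mathrm{G}}$. Here is a throwaway numeric check of that statement (stdlib Python, all levels and parameters made up):

```python
import math

# Toy numeric check (made-up levels): N = z d/dz ln Z_G reproduces the
# Fermi occupancy sum of eq. 1.0.1b.
levels = [0.0, 0.5, 1.0, 2.0]   # arbitrary single-particle energies
beta, z = 1.3, 0.7              # arbitrary inverse temperature, fugacity

def ln_ZG(zz):
    return sum(math.log(1.0 + zz * math.exp(-beta * e)) for e in levels)

dz = 1e-6
N_deriv = z * (ln_ZG(z + dz) - ln_ZG(z - dz)) / (2 * dz)   # z d/dz ln Z_G
N_sum = sum(1.0 / (math.exp(beta * e) / z + 1.0) for e in levels)
print(N_deriv, N_sum)   # the two values agree
```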

Recap: Density of states

We’ll employ the 3D non-relativistic density of states

\begin{aligned}\mathcal{D}(\epsilon) &= \sum_\mathbf{k} \delta(\epsilon - \epsilon_\mathbf{k}) \\ &\sim V \int \frac{d^3 \mathbf{k}}{(2 \pi)^3}\delta(\epsilon - \epsilon_\mathbf{k}) \\ &= \frac{4 \pi V}{(2 \pi)^3}\int dk k^2 \delta\left( \epsilon - \frac{\hbar^2 k^2}{2 m} \right) \\ &= \frac{4 \pi V}{(2 \pi)^3}\int dk k^2 \frac{ \delta\left( k - \sqrt{2 m \epsilon}/\hbar \right)}{ \frac{\hbar^2}{m} \frac{\sqrt{2 m \epsilon}}{\hbar}} \\ &= \frac{2 V}{(2 \pi)^2 }\frac{m}{\hbar^2}\sqrt{\frac{2 m \epsilon}{\hbar^2}},\end{aligned} \hspace{\stretch{1}}(1.0.1b)

or

\begin{aligned}\boxed{\mathcal{D}(\epsilon)=\frac{V}{(2 \pi)^2 }\left( \frac{2 m}{\hbar^2} \right)^{3/2}\epsilon^{1/2}.}\end{aligned} \hspace{\stretch{1}}(1.0.1b)

Density

Now let’s make our integral approximation of the sum for $N$. That is

\begin{aligned}N &= g \int d\epsilon \mathcal{D}(\epsilon) \frac{1}{{ z^{-1} e^{\beta \epsilon} + 1 }} \\ &= g \frac{V}{(2 \pi)^2 }\left( \frac{2 m}{\hbar^2} \right)^{3/2}\int_0^\infty d\epsilon \frac{\epsilon^{1/2}}{ z^{-1} e^{\beta \epsilon} + 1 } \\ &= g \frac{V}{(2 \pi)^2 \beta^{3/2}}\left( \frac{2 m}{\hbar^2} \right)^{3/2}\int_0^\infty du \frac{u^{1/2}}{ z^{-1} e^{u} + 1 } \\ &= g \frac{V}{(2 \pi)^2 \beta^{3/2}}\left( \frac{2 m}{\hbar^2} \right)^{3/2}\Gamma(3/2) f_{3/2}(z) \\ &= g \frac{V}{(2 \pi)^2 }\frac{\left( 2 m k_{\mathrm{B}} T \right)^{3/2}}{\hbar^3}\frac{1}{{2}} \sqrt{\pi}f_{3/2}(z) \\ &= g V \pi \frac{\left( 2 m k_{\mathrm{B}} T \right)^{3/2}}{h^3}\sqrt{\pi}f_{3/2}(z),\end{aligned} \hspace{\stretch{1}}(1.0.4)

or

\begin{aligned}\frac{N}{V} = g \frac{\left( 2 \pi m k_{\mathrm{B}} T \right)^{3/2}}{h^3}f_{3/2}(z).\end{aligned} \hspace{\stretch{1}}(1.0.5)

With

\begin{aligned}\lambda = \frac{h}{\sqrt{ 2 \pi m k_{\mathrm{B}} T }},\end{aligned} \hspace{\stretch{1}}(1.0.6)

this gives us the desired density result from the text

\begin{aligned}\boxed{\frac{N}{V}=\frac{g}{\lambda^3} f_{3/2}(z).}\end{aligned} \hspace{\stretch{1}}(1.0.7)
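As a cross-check, the Fermi-Dirac function used here has both an integral and a series representation, $f_{3/2}(z) = \frac{1}{\Gamma(3/2)} \int_0^\infty du \, u^{1/2}/(z^{-1} e^u + 1) = \sum_{k \ge 1} (-1)^{k+1} z^k / k^{3/2}$, the series converging for $z \le 1$. A rough numeric comparison (function names and tolerances are my own):

```python
import math

def f_32_series(z, terms=200):
    # f_{3/2}(z) = sum_{k>=1} (-1)^{k+1} z^k / k^{3/2}, valid for |z| <= 1
    return sum((-1) ** (k + 1) * z**k / k**1.5 for k in range(1, terms + 1))

def f_32_integral(z, upper=60.0, n=200000):
    # (1/Gamma(3/2)) \int_0^\infty u^{1/2} / (z^{-1} e^u + 1) du, crude Riemann sum
    du = upper / n
    total = 0.0
    for i in range(1, n + 1):       # integrand vanishes at u = 0
        u = i * du
        total += math.sqrt(u) / (math.exp(u) / z + 1.0) * du
    return total / math.gamma(1.5)

z = 0.5
print(f_32_series(z), f_32_integral(z))   # agree to several digits
```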

Pressure

For the pressure, we can do the same, but have to integrate by parts

\begin{aligned}P V \beta &= g \sum \ln \left( 1 + z e^{-\beta \epsilon} \right) \\ &\sim g \frac{V}{(2 \pi)^2 }\left( \frac{2 m}{\hbar^2} \right)^{3/2}\int_0^\infty d\epsilon \epsilon^{1/2} \ln \left( 1 + z e^{-\beta \epsilon} \right) \\ &= - g \frac{V}{(2 \pi)^2 }\left( \frac{2 m}{\hbar^2} \right)^{3/2}\int_0^\infty d\epsilon \frac{2}{3} \epsilon^{3/2} \frac{-\beta z e^{-\beta \epsilon} }{ 1 + z e^{-\beta \epsilon} } \\ &= g\frac{V}{(2 \pi)^2 }\left( \frac{2 m}{\hbar^2} \right)^{3/2}\frac{2}{3} \frac{1}{{\beta^{3/2}}}\int_0^\infty dx\frac{x^{3/2}}{z^{-1} e^{x} + 1 } \\ &= g\frac{2}{3} 2 \pi V\frac{\left( 2 m k_{\mathrm{B}} T \right)^{3/2}}{h^3 }\Gamma(5/2)f_{5/2}(z) \\ &= g\frac{2}{3} 2 \pi V\frac{\left( 2 m k_{\mathrm{B}} T \right)^{3/2}}{h^3 }\frac{3}{2} \frac{1}{2} \sqrt{\pi}f_{5/2}(z) \\ &= g V\frac{\left( 2 \pi m k_{\mathrm{B}} T \right)^{3/2}}{h^3 }f_{5/2}(z),\end{aligned} \hspace{\stretch{1}}(1.0.7)

or

\begin{aligned}\boxed{P \beta = \frac{g}{\lambda^3} f_{5/2}(z).}\end{aligned} \hspace{\stretch{1}}(1.0.9)

Energy

The average energy is the last of these thermodynamic quantities, and it also comes very easily. We have

\begin{aligned}U &= - \frac{\partial {}}{\partial {\beta}} \ln Z_{\mathrm{G}} \\ &= - \frac{\partial {T}}{\partial {\beta}} \frac{\partial {}}{\partial {T}} \ln Z_{\mathrm{G}} \\ &= - \frac{\partial {(1/k_{\mathrm{B}} \beta)}}{\partial {\beta}} \frac{\partial {}}{\partial {T}} P V \beta \\ &= \frac{1}{{k_{\mathrm{B}} \beta^2}}\frac{\partial {}}{\partial {T}} \frac{g V}{\lambda^3} f_{5/2}(z) \\ &= g V k_{\mathrm{B}} T^2f_{5/2}(z)\frac{\partial {}}{\partial {T}} \frac{\left( 2 \pi m k_{\mathrm{B}} T \right)^{3/2}}{h^3} \\ &= \frac{3}{2} \frac{g V k_{\mathrm{B}} T}{\lambda^3}f_{5/2}(z).\end{aligned} \hspace{\stretch{1}}(1.0.10)

From eq. 1.0.7, we have

\begin{aligned}\frac{g V}{\lambda^3} = \frac{N}{f_{3/2}(z) },\end{aligned} \hspace{\stretch{1}}(1.0.11)

so the energy takes the form

\begin{aligned}\boxed{U = \frac{3}{2} N k_{\mathrm{B}} T \frac{f_{5/2}(z)}{f_{3/2}(z) }.}\end{aligned} \hspace{\stretch{1}}(1.0.12)

We can compare this to the ratio of pressure to density

\begin{aligned}\frac{P \beta}{n} = \frac{f_{5/2}(z)}{f_{3/2}(z) },\end{aligned} \hspace{\stretch{1}}(1.0.13)

to find

\begin{aligned}U= \frac{3}{2} N k_{\mathrm{B}} T \frac{P V \beta}{N}= \frac{3}{2} P V,\end{aligned} \hspace{\stretch{1}}(1.0.14)

or

\begin{aligned}\boxed{P V = \frac{2}{3} U.}\end{aligned} \hspace{\stretch{1}}(1.0.15)

# References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

## An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 27, 2013

Here’s my second update of my notes compilation for this course, including all of the following:

March 27, 2013 Fermi gas

March 26, 2013 Fermi gas thermodynamics

March 26, 2013 Fermi gas thermodynamics

March 23, 2013 Relativistic generalization of statistical mechanics

March 21, 2013 Kittel Zipper problem

March 18, 2013 Pathria chapter 4 diatomic molecule problem

March 17, 2013 Gibbs sum for a two level system

March 16, 2013 open system variance of N

March 16, 2013 probability forms of entropy

March 14, 2013 Grand Canonical/Fermion-Bosons

March 13, 2013 Quantum anharmonic oscillator

March 12, 2013 Grand canonical ensemble

March 11, 2013 Heat capacity of perturbed harmonic oscillator

March 10, 2013 Langevin small approximation

March 10, 2013 Addition of two one half spins

March 10, 2013 Midterm II reflection

March 07, 2013 Thermodynamic identities

March 06, 2013 Temperature

March 05, 2013 Interacting spin

plus everything detailed in the description of my first update and before.

## PHY452H1S Basic Statistical Mechanics. Lecture 16: Fermi gas. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 27, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

# Fermi gas

Review

Continuing a discussion of [1] section 8.1 content.

We found

\begin{aligned}n_{\mathbf{k}} = \frac{1}{{e^{\beta(\epsilon_k - \mu)} + 1}}\end{aligned} \hspace{\stretch{1}}(1.2.1)

With no spin

\begin{aligned}\int n_\mathbf{k} \times \frac{d^3 k}{(2\pi)^3} = \rho\end{aligned} \hspace{\stretch{1}}(1.2.2)

Fig 1.1: Occupancy at low temperature limit

Fig 1.2: Volume integral over momentum up to Fermi energy limit

\begin{aligned}\epsilon_{\mathrm{F}} = \frac{\hbar^2 k_{\mathrm{F}}^2}{2m}\end{aligned} \hspace{\stretch{1}}(1.2.3)

gives

\begin{aligned}k_{\mathrm{F}} = (6 \pi^2 \rho)^{1/3}\end{aligned} \hspace{\stretch{1}}(1.2.4)

\begin{aligned}\sum_\mathbf{k} n_\mathbf{k} = N\end{aligned} \hspace{\stretch{1}}(1.2.5)

\begin{aligned}\mathbf{k} = \frac{2\pi}{L}(n_x, n_y, n_z)\end{aligned} \hspace{\stretch{1}}(1.2.6)

This is for periodic boundary conditions \footnote{I filled in details in the last lecture using a particle in a box, whereas this periodic condition was intended. We see that both achieve the same result}, where

\begin{aligned}\Psi(x + L) = \Psi(x)\end{aligned} \hspace{\stretch{1}}(1.2.7)

Moving on

\begin{aligned}\sum_{k_x} n(\mathbf{k}) = \sum_{n_x} \Delta n_x n(\mathbf{k})\end{aligned} \hspace{\stretch{1}}(1.2.8)

with

\begin{aligned}\Delta k_x = \frac{2 \pi}{L} \Delta n_x\end{aligned} \hspace{\stretch{1}}(1.2.9)

this gives

\begin{aligned}\sum_{k_x} n(\mathbf{k}) = \sum_{n_x} \frac{L}{2\pi} \Delta k_x n(\mathbf{k}) \rightarrow \frac{L}{2\pi} \int d k_x n(\mathbf{k})\end{aligned} \hspace{\stretch{1}}(1.2.10)

Over all dimensions

\begin{aligned}\sum_{\mathbf{k}} n_\mathbf{k} = \left( \frac{L}{2\pi} \right)^3 \left( \int d^3 \mathbf{k} \right)n(\mathbf{k})=N\end{aligned} \hspace{\stretch{1}}(1.2.11)

so that

\begin{aligned}\rho = \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} n(\mathbf{k})\end{aligned} \hspace{\stretch{1}}(1.2.12)

Again

\begin{aligned}k_{\mathrm{F}} = (6 \pi^2 \rho)^{1/3}\end{aligned} \hspace{\stretch{1}}(1.2.13)

## Example: Spin considerations


\begin{aligned}\sum_{\mathbf{k}, m_s} n_{\mathbf{k}} = N\end{aligned} \hspace{\stretch{1}}(1.2.14)

\begin{aligned}\sum_{\mathbf{k}, m_s} \frac{1}{{e^{\beta(\epsilon_k - \mu)} + 1}} = (2 S + 1)\left( \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} n(\mathbf{k}) \right)L^3\end{aligned} \hspace{\stretch{1}}(1.2.15)

This gives us

\begin{aligned}k_{\mathrm{F}} = \left( \frac{ 6 \pi^2 \rho }{2 S + 1} \right)^{1/3}\end{aligned} \hspace{\stretch{1}}(1.2.16)

and again

\begin{aligned}\epsilon_{\mathrm{F}} = \frac{\hbar^2 k_{\mathrm{F}}^2}{2m}\end{aligned} \hspace{\stretch{1}}(1.2.17)

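To put a scale on these results, here is a rough numeric evaluation for a spin one-half electron gas, using an assumed conduction-electron density of about $8.5 \times 10^{28} \,\text{m}^{-3}$ (a commonly quoted copper-like figure, not a number from the lecture):

```python
import math

# Illustrative numbers (assumed, not from the lecture): a spin 1/2
# electron gas with a copper-like density rho ~ 8.5e28 m^-3.
hbar = 1.054572e-34    # J s
m_e = 9.10938e-31      # kg
eV = 1.602177e-19      # J per eV
rho = 8.5e28           # m^-3

k_F = (6 * math.pi**2 * rho / 2) ** (1 / 3)    # eq. 1.2.16 with 2S + 1 = 2
eps_F = hbar**2 * k_F**2 / (2 * m_e) / eV      # eq. 1.2.17, in eV
print(k_F, eps_F)      # k_F ~ 1.4e10 m^-1, eps_F ~ 7 eV
```

The resulting Fermi energy of a few eV corresponds to a Fermi temperature of tens of thousands of Kelvin, which is why room-temperature metals sit deep in the degenerate regime.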

High Temperatures

Now we want to look at the higher temperature range, where the occupancy may look like fig. 1.3

Fig 1.3: Occupancy at higher temperatures

\begin{aligned}\mu(T = 0) = \epsilon_{\mathrm{F}}\end{aligned} \hspace{\stretch{1}}(1.2.18)

\begin{aligned}\mu(T \rightarrow \infty) \rightarrow - \infty\end{aligned} \hspace{\stretch{1}}(1.2.19)

so that for large $T$ we have

\begin{aligned}\frac{1}{{e^{\beta(\epsilon_k - \mu)} + 1}} \rightarrow e^{-\beta(\epsilon_k - \mu)}\end{aligned} \hspace{\stretch{1}}(1.2.20)

\begin{aligned}\rho &= \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} e^{\beta \mu} e^{-\beta \epsilon_k} \\ &= e^{\beta \mu} \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} e^{-\beta \epsilon_k} \\ &= e^{\beta \mu} \int dk \frac{4 \pi k^2}{(2 \pi)^3} e^{-\beta \hbar^2 k^2/2m}.\end{aligned} \hspace{\stretch{1}}(1.2.21)

Mathematica (or integration by parts) tells us that

\begin{aligned}\frac{1}{{(2 \pi)^3}} \int 4 \pi k^2 dk e^{-a k^2} = \frac{1}{{(4 \pi a )^{3/2}}},\end{aligned} \hspace{\stretch{1}}(1.2.22)

so we have

\begin{aligned}\rho &= e^{\beta \mu} \left( \frac{2m}{ 4 \pi \beta \hbar^2} \right)^{3/2} \\ &= e^{\beta \mu} \left( \frac{2 m k_{\mathrm{B}} T 4 \pi^2 }{ 4 \pi h^2} \right)^{3/2} \\ &= e^{\beta \mu} \left( \frac{2 m k_{\mathrm{B}} T \pi }{ h^2} \right)^{3/2}\end{aligned} \hspace{\stretch{1}}(1.2.23)

Introducing $\lambda$ for the thermal de Broglie wavelength, $\lambda^3 \sim T^{-3/2}$

\begin{aligned}\lambda \equiv \frac{h}{\sqrt{2 \pi m k_{\mathrm{B}} T}},\end{aligned} \hspace{\stretch{1}}(1.2.24)

we have

\begin{aligned}\rho = e^{\beta \mu} \frac{1}{{\lambda^3}}.\end{aligned} \hspace{\stretch{1}}(1.2.25)
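A quick numeric illustration of the classical regime (all numbers assumed, not from the lecture): for a dilute nitrogen-like gas at room temperature, $\rho \lambda^3 = e^{\beta \mu} \ll 1$, consistent with the Boltzmann-limit replacement made above:

```python
import math

# Classical-regime sanity check (assumed numbers, not from the lecture):
# nitrogen-like gas at room temperature and roughly atmospheric density.
h = 6.62607e-34        # J s
k_B = 1.380649e-23     # J/K
m = 4.65e-26           # kg, roughly an N2 molecule
T = 300.0              # K
rho = 2.5e25           # m^-3, about atmospheric number density

lam = h / math.sqrt(2 * math.pi * m * k_B * T)   # eq. 1.2.24
rho_lam3 = rho * lam**3                          # = e^{beta mu}, eq. 1.2.25
print(lam, rho_lam3)   # lam ~ 2e-11 m, rho lam^3 ~ 1e-7 << 1
```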

Does it make any sense to have density as a function of temperature? A plot of the density, inappropriately extended down to low temperatures, is found in fig. 1.4 for a few arbitrarily chosen numerical values of the chemical potential $\mu$, where we see that it drops to zero with temperature. I suppose that makes sense if we are not holding volume constant.

Fig 1.4: Density as a function of temperature

We can write

\begin{aligned}\boxed{e^{\beta \mu} = \left( \rho \lambda^3 \right)}\end{aligned} \hspace{\stretch{1}}(1.2.26)

\begin{aligned}\frac{\mu}{k_{\mathrm{B}} T} = \ln \left( \rho \lambda^3 \right)\sim -\frac{3}{2} \ln T\end{aligned} \hspace{\stretch{1}}(1.2.27)

or (taking $\rho$ (and/or volume?) as a constant) we have for large temperatures

\begin{aligned}\mu \propto -T \ln T\end{aligned} \hspace{\stretch{1}}(1.2.28)

The chemical potential is plotted in fig. 1.5, whereas this $- k_{\mathrm{B}} T \ln k_{\mathrm{B}} T$ function is plotted in fig. 1.6. The contributions to $\mu$ from the $k_{\mathrm{B}} T \ln (\rho h^3 (2 \pi m)^{-3/2})$ term are dropped for the high temperature approximation.

Fig 1.5: Chemical potential over degenerate to classical range

Fig 1.6: High temp approximation of chemical potential, extended back to T = 0

Pressure

\begin{aligned}P = - \frac{\partial {E}}{\partial {V}}\end{aligned} \hspace{\stretch{1}}(1.2.29)

For a classical ideal gas as in fig. 1.7 we have

Fig 1.7: Ideal gas pressure vs volume

\begin{aligned}P = \rho k_{\mathrm{B}} T\end{aligned} \hspace{\stretch{1}}(1.2.30)

For a Fermi gas at $T = 0$ we have

\begin{aligned}E &= \sum_\mathbf{k} \epsilon_k n_k \\ &= \sum_\mathbf{k} \epsilon_k \Theta(\mu_0 - \epsilon_k) \\ &= \frac{V}{(2\pi)^3} \int_{\epsilon_k < \mu_0} \frac{\hbar^2 \mathbf{k}^2}{2 m} d^3 \mathbf{k} \\ &= \frac{V}{(2\pi)^3} \int_0^{k_{\mathrm{F}}} \frac{\hbar^2 \mathbf{k}^2}{2 m} d^3 \mathbf{k} \\ &= \frac{V}{(2\pi)^3} \frac{\hbar^2}{2 m} \int_0^{k_{\mathrm{F}}} k^2 4 \pi k^2 d k\propto k_{\mathrm{F}}^5\end{aligned} \hspace{\stretch{1}}(1.2.31)

Specifically,

\begin{aligned}E(T = 0) = V \times \frac{3}{5} \underbrace{\epsilon_{\mathrm{F}}}_{\sim k_{\mathrm{F}}^2}\underbrace{\rho}_{\sim k_{\mathrm{F}}^3}\end{aligned} \hspace{\stretch{1}}(1.2.32)

or

\begin{aligned}\frac{E}{N} = \frac{3}{5} \epsilon_{\mathrm{F}}\end{aligned} \hspace{\stretch{1}}(1.2.33)

\begin{aligned}E = \frac{3}{5} N \frac{\hbar^2}{2 m} \left( 6 \pi^2 \frac{N}{V} \right)^{2/3} = a V^{-2/3},\end{aligned} \hspace{\stretch{1}}(1.2.34)

so that

\begin{aligned}\frac{\partial {E}}{\partial {V}} = -\frac{2}{3} a V^{-5/3}.\end{aligned} \hspace{\stretch{1}}(1.2.35)

\begin{aligned}P &= -\frac{\partial {E}}{\partial {V}} \\ &= \frac{2}{3} \left( a V^{-2/3} \right)V^{-1} \\ &= \frac{2}{3} \frac{E}{V} \\ &= \frac{2}{3} \left( \frac{3}{5} \epsilon_{\mathrm{F}} \rho \right) \\ &= \frac{2}{5} \epsilon_{\mathrm{F}} \rho.\end{aligned} \hspace{\stretch{1}}(1.2.36)

We see that the pressure ends up deviating from the classical result at low temperatures, as sketched in fig. 1.8. This low temperature limit for the pressure $2 \epsilon_{\mathrm{F}} \rho/5$ is called the degeneracy pressure.

Fig 1.8: Fermi degeneracy pressure
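For a sense of magnitude, here is a sketch of the $T = 0$ degeneracy pressure $P = 2 \epsilon_{\mathrm{F}} \rho/5$ for an assumed copper-like electron density of $8.5 \times 10^{28} \,\text{m}^{-3}$ (an illustrative number, not from the lecture):

```python
import math

# Order-of-magnitude sketch of the T = 0 degeneracy pressure
# P = (2/5) eps_F rho (eq. 1.2.36), with an assumed copper-like
# electron density (illustrative number, not from the lecture).
hbar = 1.054572e-34    # J s
m_e = 9.10938e-31      # kg
rho = 8.5e28           # m^-3, spin 1/2 so k_F = (3 pi^2 rho)^{1/3}
k_F = (3 * math.pi**2 * rho) ** (1 / 3)
eps_F = hbar**2 * k_F**2 / (2 * m_e)           # J
P = 0.4 * eps_F * rho                          # Pa
print(P)   # tens of GPa, vastly larger than any classical rho k_B T here
```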

# References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

## PHY452H1S Basic Statistical Mechanics. Lecture 18: Fermi gas thermodynamics. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 26, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Review

Last time we found that the low temperature behaviour of the chemical potential was quadratic, as in fig. 1.1.

\begin{aligned}\mu =\mu(0) - a \frac{T^2}{T_{\mathrm{F}}}\end{aligned} \hspace{\stretch{1}}(1.1.1)

Fig 1.1: Fermi gas chemical potential

Specific heat

\begin{aligned}E = \sum_\mathbf{k} n_{\mathrm{F}}(\epsilon_\mathbf{k}, T) \epsilon_\mathbf{k}\end{aligned} \hspace{\stretch{1}}(1.1.2)

\begin{aligned}\frac{E}{V} &= \frac{1}{{(2\pi)^3}} \int d^3 \mathbf{k} n_{\mathrm{F}}(\epsilon_\mathbf{k}, T) \epsilon_\mathbf{k} \\ &= \int d\epsilon N(\epsilon) n_{\mathrm{F}}(\epsilon, T) \epsilon,\end{aligned} \hspace{\stretch{1}}(1.1.3)

where

\begin{aligned}N(\epsilon) = \frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)^{3/2}\sqrt{\epsilon}.\end{aligned} \hspace{\stretch{1}}(1.1.4)

Low temperature $C_{\mathrm{V}}$

\begin{aligned}\frac{\Delta E(T)}{V}=\int_0^\infty d\epsilon N(\epsilon)\left( n_{\mathrm{F}}(\epsilon, T) - n_{\mathrm{F}}(\epsilon, 0) \right)\epsilon\end{aligned} \hspace{\stretch{1}}(1.1.5)

The only change in the distribution fig. 1.2, that is of interest is over the step portion of the distribution, and over this range of interest $N(\epsilon)$ is approximately constant as in fig. 1.3.

Fig 1.2: Fermi distribution

Fig 1.3: Fermi gas density of states

\begin{aligned}N(\epsilon) \approx N(\mu)\end{aligned} \hspace{\stretch{1}}(1.0.6a)

\begin{aligned}\mu \approx \epsilon_{\mathrm{F}},\end{aligned} \hspace{\stretch{1}}(1.0.6b)

so that

\begin{aligned}\Delta e \equiv\frac{\Delta E(T)}{V}\approx N(\epsilon_{\mathrm{F}})\int_0^\infty d\epsilon \, \epsilon\left( n_{\mathrm{F}}(\epsilon, T) - n_{\mathrm{F}}(\epsilon, 0) \right)=N(\epsilon_{\mathrm{F}})\int_{-\epsilon_{\mathrm{F}}}^\infty d x \, (\epsilon_{\mathrm{F}} + x)\left( n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, T) - n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, 0) \right).\end{aligned} \hspace{\stretch{1}}(1.0.7)

Here we’ve made a change of variables $\epsilon = \epsilon_{\mathrm{F}} + x$, so that we have near cancellation of the $\epsilon_{\mathrm{F}}$ factor

\begin{aligned}\Delta e &= N(\epsilon_{\mathrm{F}})\epsilon_{\mathrm{F}}\int_{-\epsilon_{\mathrm{F}}}^\infty d x \underbrace{\left( n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, T) - n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, 0) \right)}_{\text{almost equal everywhere}}+N(\epsilon_{\mathrm{F}})\int_{-\epsilon_{\mathrm{F}}}^\infty d x \, x\left( n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, T) - n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, 0) \right) \\ &\approx N(\epsilon_{\mathrm{F}})\int_{-\infty}^\infty d x x\left( \frac{1}{{ e^{\beta x} +1 }} - {\left.{{\frac{1}{{ e^{\beta x} +1 }}}}\right\vert}_{{T \rightarrow 0}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.8)

Here we’ve extended the integration range to $-\infty$, which doesn’t change much: the integrand is appreciable only within roughly $k_{\mathrm{B}} T$ of $x = 0$, so for $T \ll T_{\mathrm{F}}$ the extension adds only exponentially small contributions. Taking derivatives with respect to temperature we have

\begin{aligned}\frac{\partial {\Delta e}}{\partial {T}} &= -N(\epsilon_{\mathrm{F}})\int_{-\infty}^\infty d x x\frac{1}{{(e^{\beta x} + 1)^2}}\frac{d}{dT} e^{\beta x} \\ &= N(\epsilon_{\mathrm{F}})\int_{-\infty}^\infty d x x\frac{1}{{(e^{\beta x} + 1)^2}}e^{\beta x}\frac{x}{k_{\mathrm{B}} T^2}\end{aligned} \hspace{\stretch{1}}(1.0.9)

With $\beta x = y$, we have for $T \ll T_{\mathrm{F}}$

\begin{aligned}\frac{C}{V} &= N(\epsilon_{\mathrm{F}})\int_{-\infty}^\infty \frac{ dy y^2 e^y }{ (e^y + 1)^2 k_{\mathrm{B}} T^2} (k_{\mathrm{B}} T)^3 \\ &= N(\epsilon_{\mathrm{F}}) k_{\mathrm{B}}^2 T\underbrace{\int_{-\infty}^\infty \frac{ dy y^2 e^y }{ (e^y + 1)^2 } }_{\pi^2/3} \\ &= \frac{\pi^2}{3} N(\epsilon_{\mathrm{F}}) k_{\mathrm{B}} (k_{\mathrm{B}} T).\end{aligned} \hspace{\stretch{1}}(1.0.10)
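The $\pi^2/3$ value quoted for that integral is easy to confirm numerically (a throwaway trapezoid-rule sketch):

```python
import math

# Check the integral used in eq. 1.0.10:
#   \int_{-inf}^{inf} y^2 e^y / (e^y + 1)^2 dy = pi^2 / 3
def integrand(y):
    # e^y/(e^y+1)^2 rewritten stably as 1/(e^{y/2} + e^{-y/2})^2
    c = math.exp(y / 2) + math.exp(-y / 2)
    return y * y / (c * c)

n, a, b = 400000, -40.0, 40.0
h = (b - a) / n
total = 0.5 * (integrand(a) + integrand(b))
total += sum(integrand(a + i * h) for i in range(1, n))
total *= h
print(total, math.pi**2 / 3)   # both ~3.2899
```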

Using eq. 1.1.4 at the Fermi energy and

\begin{aligned}\frac{N}{V} = \rho\end{aligned} \hspace{\stretch{1}}(1.0.11a)

\begin{aligned}\epsilon_{\mathrm{F}} = \frac{\hbar^2 k_{\mathrm{F}}^2}{2 m}\end{aligned} \hspace{\stretch{1}}(1.0.11b)

\begin{aligned}k_{\mathrm{F}} = \left( 6 \pi^2 \rho \right)^{1/3},\end{aligned} \hspace{\stretch{1}}(1.0.11c)

we have

\begin{aligned}N(\epsilon_{\mathrm{F}}) &= \frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)^{3/2}\sqrt{\epsilon_{\mathrm{F}}} \\ &= \frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)^{3/2}\frac{\hbar k_{\mathrm{F}}}{\sqrt{2m}} \\ &= \frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)^{3/2}\frac{\hbar }{\sqrt{2m}} \left( 6 \pi^2 \rho \right)^{1/3} \\ &= \frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)\left( 6 \pi^2 \frac{N}{V} \right)^{1/3}\end{aligned} \hspace{\stretch{1}}(1.0.12)

Giving

\begin{aligned}\frac{C}{N} &= \frac{\pi^2}{3} \frac{V}{N}\frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)\left( 6 \pi^2 \frac{N}{V} \right)^{1/3}k_{\mathrm{B}} (k_{\mathrm{B}} T) \\ &= \left( \frac{m}{6 \hbar^2} \right)\left( \frac{V}{N} \right)^{2/3}\left( 6 \pi^2 \right)^{1/3}k_{\mathrm{B}} (k_{\mathrm{B}} T) \\ &= \left( \frac{ \pi^2 m}{\hbar^2} \right)\left( \frac{V}{6 \pi^2 N} \right)^{2/3}k_{\mathrm{B}} (k_{\mathrm{B}} T) \\ &= \left( \frac{ \pi^2 m}{\hbar^2} \right)\frac{\hbar^2}{2 m \epsilon_{\mathrm{F}}}k_{\mathrm{B}} (k_{\mathrm{B}} T),\end{aligned} \hspace{\stretch{1}}(1.0.13)

or

\begin{aligned}\boxed{\frac{C}{N} = \frac{\pi^2}{2} k_{\mathrm{B}} \frac{ k_{\mathrm{B}} T}{\epsilon_{\mathrm{F}}}.}\end{aligned} \hspace{\stretch{1}}(1.0.14)

This is illustrated in fig. 1.4.

Fig 1.4: Specific heat per Fermion
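To attach a number (assumed values, not from the lecture): with $\epsilon_{\mathrm{F}} \sim 7\,\text{eV}$, typical of a metal, the linear electronic specific heat at room temperature is a small fraction of the classical $\frac{3}{2} k_{\mathrm{B}}$ per particle:

```python
import math

# Size of the linear electronic specific heat (eq. 1.0.14) at room
# temperature, with an assumed metal-like eps_F ~ 7 eV.
k_B = 1.380649e-23     # J/K
eV = 1.602177e-19      # J per eV
T, eps_F = 300.0, 7.0 * eV
C_per_N = (math.pi**2 / 2) * k_B * (k_B * T / eps_F)
print(C_per_N / k_B)   # ~0.018: tiny next to the classical 3/2
```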

Relativistic gas

1. Relativistic gas

\begin{aligned}\epsilon_\mathbf{k} = \pm \hbar v \left\lvert {\mathbf{k}} \right\rvert.\end{aligned} \hspace{\stretch{1}}(1.0.15)

\begin{aligned}\epsilon = \sqrt{(m_0 c^2)^2 + c^2 (\hbar \mathbf{k})^2}\end{aligned} \hspace{\stretch{1}}(1.0.16)

2. graphene
3. massless Dirac Fermion

Fig 1.5: Relativistic gas energy distribution

We can think of this state distribution in a condensed matter view, where we can have a hole to electron state transition by supplying energy to the system (i.e. shining light on the substrate). This can also be thought of in a relativistic particle view, where the same state transition can be thought of as a positron-electron pair transition. A round trip transition will have to supply energy of about $2 m_0 c^2$, as illustrated in fig. 1.6.

Fig 1.6: Hole to electron round trip transition energy requirement

Graphene

Consider graphene, a 2D system. We want to determine the density of states $N(\epsilon)$,

\begin{aligned}\int \frac{d^2 \mathbf{k}}{(2 \pi)^2} \rightarrow \int_{-\infty}^\infty d\epsilon N(\epsilon).\end{aligned} \hspace{\stretch{1}}(1.0.17)

We’ll find a density of states distribution like fig. 1.7.

Fig 1.7: Density of states for 2D linear energy momentum distribution

\begin{aligned}N(\epsilon) = \text{constant factor} \frac{\left\lvert {\epsilon} \right\rvert}{v},\end{aligned} \hspace{\stretch{1}}(1.0.18)

\begin{aligned}C \sim \frac{d}{dT} \int N(\epsilon) n_{\mathrm{F}}(\epsilon) \epsilon d\epsilon,\end{aligned} \hspace{\stretch{1}}(1.0.19)

\begin{aligned}\Delta E \sim \underbrace{T}_{\text{window}}\times\underbrace{T}_{\text{energy}}\times\underbrace{T}_{\text{number of states}}\sim T^3\end{aligned} \hspace{\stretch{1}}(1.0.20)

so that

\begin{aligned}C_{\mathrm{Dimensionless}} \sim T^2\end{aligned} \hspace{\stretch{1}}(1.0.21)
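That $T^3$ energy scaling can be checked directly by integrating against the linear density of states (a dimensionless sketch, prefactors and units dropped, function names my own):

```python
import math

# Scaling check for the linear density of states N(e) ~ |e| (eq. 1.0.18):
# the thermal energy added above the T = 0 filled sea reduces to
#   Delta E(T) ~ 2 \int_0^inf x^2 / (e^{x/T} + 1) dx ~ T^3,
# so C ~ dDeltaE/dT ~ T^2.  Units with k_B = 1; prefactors dropped.
def delta_E(T, upper=60.0, n=200000):
    h = upper / n
    return 2 * sum((i * h) ** 2 / (math.exp(i * h / T) + 1.0) * h
                   for i in range(1, n + 1))

r = delta_E(1.0) / delta_E(0.5)
print(r)   # ~8 = 2^3, confirming Delta E ~ T^3
```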

## Relativistic generalization of statistical mechanics

Posted by peeterjoot on March 22, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

# Motivation

I was wondering how to generalize the arguments of [1] to relativistic systems. Here’s a bit of blundering through the non-relativistic arguments of that text, tweaking them slightly.

I’m sure this has all been done before, but was a useful exercise to understand the non-relativistic arguments of Pathria better.

# Generalizing from energy to four momentum

Generalizing the arguments of section 1.1.

Instead of considering that the total energy of the system is fixed, it makes sense that we’d have to instead consider the total four-momentum of the system fixed, so if we have $N$ particles, we have a total four momentum

\begin{aligned}P = \sum_i n_i P_i = \sum n_i \left( \epsilon_i/c, \mathbf{p}_i \right),\end{aligned} \hspace{\stretch{1}}(1.2.1)

where $n_i$ is the total number of particles with four momentum $P_i$. We can probably expect that the $n_i$‘s in this relativistic system will be smaller than those in a non-relativistic system since we have many more states when considering that we can have both specific energies and specific momentum, and the combinatorics of those extra degrees of freedom. However, we’ll still have

\begin{aligned}N = \sum_i n_i.\end{aligned} \hspace{\stretch{1}}(1.2.2)

Only given a specific observer frame can these four-momentum components $\left( \epsilon_i/c, \mathbf{p}_i \right)$ be expressed explicitly, as in

\begin{aligned}\epsilon_i = \gamma_i m_i c^2\end{aligned} \hspace{\stretch{1}}(1.0.3a)

\begin{aligned}\mathbf{p}_i = \gamma_i m_i \mathbf{v}_i\end{aligned} \hspace{\stretch{1}}(1.0.3b)

\begin{aligned}\gamma_i = \frac{1}{{\sqrt{1 - \mathbf{v}_i^2/c^2}}},\end{aligned} \hspace{\stretch{1}}(1.0.3c)

where $\mathbf{v}_i$ is the velocity of the particle in that observer frame.
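As a sanity check on these component expressions, they satisfy the frame-invariant relation $\epsilon_i^2 - (\mathbf{p}_i c)^2 = (m_i c^2)^2$ regardless of the observer velocity; numerically (arbitrary velocity, electron mass chosen for concreteness):

```python
import math

# Consistency of eq. 1.0.3: with eps = gamma m c^2 and p = gamma m v,
# the invariant eps^2 - (p c)^2 = (m c^2)^2 holds for any v < c.
c = 299792458.0          # m/s
m = 9.10938e-31          # kg, an electron, say
v = 0.6 * c              # arbitrary velocity
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
eps = gamma * m * c**2
p = gamma * m * v
lhs = eps**2 - (p * c) ** 2
rhs = (m * c**2) ** 2
print(abs(lhs / rhs - 1.0))   # ~0, up to float rounding
```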

# Generalizing the number of microstates, and notion of thermodynamic equilibrium

Generalizing the arguments of section 1.2.

We can still count the number of all possible microstates, but that number, denoted $\Omega(N, V, E)$, for a given total energy needs to be parameterized differently. First off, any given volume is observer dependent, so we likely need to map

\begin{aligned}V \rightarrow \int d^4 x = \int dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3.\end{aligned} \hspace{\stretch{1}}(1.0.4)

Let’s still call this $V$, but know that we mean this to be a four-volume element, bounded in both space and time, referred to a fixed observer’s frame. So, let’s write the total number of microstates as

\begin{aligned}\Omega(N, V, P) = \Omega \left( N, \int d^4 x, E/c, P^1, P^2, P^3 \right),\end{aligned} \hspace{\stretch{1}}(1.0.5)

where $P = ( E/c, \mathbf{P} )$ is the total four momentum of the system. Suppose we have a system subdivided into two systems in contact as in fig. 1.1, where the two systems have total four momentum $P_1$ and $P_2$ respectively.

Fig 1.1: Two physical systems in thermal contact

In the text the total energy of both systems was written

\begin{aligned}E^{(0)} = E_1 + E_2,\end{aligned} \hspace{\stretch{1}}(1.0.6)

so we’ll write

\begin{aligned}{P^{(0)}}^\mu = P_1^\mu + P_2^\mu = \text{constant},\end{aligned} \hspace{\stretch{1}}(1.0.7)

so that the total number of microstates of the combined system is now

\begin{aligned}\Omega^{(0)}(P_1, P_2) = \Omega_1(P_1) \Omega_2(P_2).\end{aligned} \hspace{\stretch{1}}(1.0.8)

As before, if $\bar{{P}}^\mu_i$ denotes an equilibrium value of $P_i^\mu$, then maximizing eq. 1.0.8 requires all the derivatives (no sum over $\mu$ here)

\begin{aligned}\left({\partial {\Omega_1(P_1)}}/{\partial {P^\mu_1}}\right)_{{P_1 = \bar{{P_1}}}}\Omega_2(\bar{{P}}_2)+\Omega_1(\bar{{P}}_1)\left({\partial {\Omega_2(P_2)}}/{\partial {P^\mu}}\right)_{{P_2 = \bar{{P_2}}}}\times\frac{\partial {P_2^\mu}}{\partial {P_1^\mu}}= 0.\end{aligned} \hspace{\stretch{1}}(1.0.9)

With each of the components of the total four-momentum $P^\mu_1 + P^\mu_2$ separately constant, we have ${\partial {P_2^\mu}}/{\partial {P_1^\mu}} = -1$, so that we have

\begin{aligned}\left({\partial {\ln \Omega_1(P_1)}}/{\partial {P^\mu_1}}\right)_{{P_1 = \bar{{P_1}}}}=\left({\partial {\ln \Omega_2(P_2)}}/{\partial {P^\mu}}\right)_{{P_2 = \bar{{P_2}}}},\end{aligned} \hspace{\stretch{1}}(1.0.10)

as before. However, we now have one such identity for each component of the total four momentum $P$ which has been held constant. Let’s now define

\begin{aligned}\beta_\mu \equiv \left({\partial {\ln \Omega(N, V, P)}}/{\partial {P^\mu}}\right)_{{N, V, P = \bar{{P}}}}.\end{aligned} \hspace{\stretch{1}}(1.0.11)

Our old scalar temperature is then

\begin{aligned}\beta_0 = c \left({\partial {\ln \Omega(N, V, P)}}/{\partial {E}}\right)_{{N, V, P = \bar{{P}}}} = c \beta = \frac{c}{k_{\mathrm{B}} T},\end{aligned} \hspace{\stretch{1}}(1.0.12)

but now we have three additional such constants to figure out what to do with. A first start would be figuring out how the Boltzmann probabilities should be generalized.

Equilibrium between a system and a heat reservoir

Generalizing the arguments of section 3.1.

As in the text, let’s consider a very large heat reservoir $A'$ and a subsystem $A$ as in fig. 1.2 that has come to a state of mutual equilibrium. This likely needs to be defined as a state in which the four vector $\beta_\mu$ is common, as opposed to just $\beta_0$ the temperature field being common.

Fig 1.2: A system A immersed in heat reservoir A’

Let the four momentum of the heat reservoir be $P_r'$, with $P_r$ for the subsystem, so that

\begin{aligned}P_r + P_r' = P^{(0)} = \text{constant}.\end{aligned} \hspace{\stretch{1}}(1.0.13)

Writing

\begin{aligned}\Omega'({P^\mu_r}') = \Omega'(P^{(0)} - {P^\mu_r}) \propto P_r,\end{aligned} \hspace{\stretch{1}}(1.0.14)

for the number of microstates in the reservoir, so that a Taylor expansion of the logarithm around $P_r' = P^{(0)}$ (with sums implied) is

\begin{aligned}\ln \Omega'({P^\mu_r}') = \ln \Omega'({P^{(0)}}) +\left({\partial {\ln \Omega'}}/{\partial {{P^\mu}'}}\right)_{{P' = P^{(0)}}} \left( {P^\mu_r}' - {P^{(0)}}^\mu \right)\approx\text{constant} - \beta_\mu' P_r^\mu.\end{aligned} \hspace{\stretch{1}}(1.0.15)

Here we’ve inserted the definition of $\beta^\mu$ from eq. 1.0.11, so that at equilibrium, with $\beta_\mu' = \beta_\mu$, we obtain

\begin{aligned}\Omega'({P^\mu_r}') \propto \exp\left( - \beta_\mu P_r^\mu \right)=\exp\left( - \beta E_r \right)\exp\left( - \beta_1 P_r^1 \right)\exp\left( - \beta_2 P_r^2 \right)\exp\left( - \beta_3 P_r^3 \right).\end{aligned} \hspace{\stretch{1}}(1.0.16)

# Next steps

This looks consistent with the outline provided in http://physics.stackexchange.com/a/4950/3621 by Lubos to the stackexchange “is there a relativistic quantum thermodynamics” question. I’m sure it wouldn’t be too hard to find references that explore this, as well as explain why non-relativistic stat mech can be used for photon problems. Further exploration of this should wait until after the studies for this course are done.

# References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

## Thermodynamic identities

Posted by peeterjoot on March 7, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Impressed with the clarity of Baez’s entropic force discussion on differential forms [1], let’s use that methodology to find all the possible identities that we can get from the thermodynamic identity (for now assuming $N$ is fixed, ignoring the chemical potential.)

This isn’t actually that much work to do, since a bit of editor regular expression magic can do most of the work.

Our starting point is the thermodynamic identity

\begin{aligned}dU = d Q + d W = T dS - P dV,\end{aligned} \hspace{\stretch{1}}(1.0.1)

or

\begin{aligned}0 = dU - T dS + P dV.\end{aligned} \hspace{\stretch{1}}(1.0.2)

It’s quite likely that many of the identities that can be obtained will be useful, but this should at least provide a handy reference of possible conversions.

Differentials in $P, V$

This first case illustrates the method.

\begin{aligned}0 &= dU - T dS + P dV \\ &= \left( \frac{\partial {U}}{\partial {P}} \right)_{V} dP +\left( \frac{\partial {U}}{\partial {V}} \right)_{P} dV- T\left( \left( \frac{\partial {S}}{\partial {P}} \right)_{V} dP + \left( \frac{\partial {S}}{\partial {V}} \right)_{P} dV \right)+ P dV \\ &= dP \left( \left( \frac{\partial {U}}{\partial {P}} \right)_{V} - T \left( \frac{\partial {S}}{\partial {P}} \right)_{V} \right)+dV \left( \left( \frac{\partial {U}}{\partial {V}} \right)_{P} - T \left( \frac{\partial {S}}{\partial {V}} \right)_{P} + P \right).\end{aligned} \hspace{\stretch{1}}(1.0.3)

Taking wedge products with $dV$ and $dP$ respectively, we form two 2-forms

\begin{aligned}0 = dP \wedge dV \left( \left( \frac{\partial {U}}{\partial {P}} \right)_{V} - T \left( \frac{\partial {S}}{\partial {P}} \right)_{V} \right)\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}0 = dV \wedge dP \left( \left( \frac{\partial {U}}{\partial {V}} \right)_{P} - T \left( \frac{\partial {S}}{\partial {V}} \right)_{P} + P \right).\end{aligned} \hspace{\stretch{1}}(1.0.4b)

Since these must both be zero we find

\begin{aligned}\left( \frac{\partial {U}}{\partial {P}} \right)_{V} = T \left( \frac{\partial {S}}{\partial {P}} \right)_{V}\end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}P =-\left( \frac{\partial {U}}{\partial {V}} \right)_{P}+ T \left( \frac{\partial {S}}{\partial {V}} \right)_{P}.\end{aligned} \hspace{\stretch{1}}(1.0.5b)

Differentials in $P, T$

\begin{aligned}0 &= dU - T dS + P dV \\ &= \left( \frac{\partial {U}}{\partial {P}} \right)_{T} dP + \left( \frac{\partial {U}}{\partial {T}} \right)_{P} dT-T \left( \left( \frac{\partial {S}}{\partial {P}} \right)_{T} dP + \left( \frac{\partial {S}}{\partial {T}} \right)_{P} dT \right)+P \left( \left( \frac{\partial {V}}{\partial {P}} \right)_{T} dP + \left( \frac{\partial {V}}{\partial {T}} \right)_{P} dT \right),\end{aligned} \hspace{\stretch{1}}(1.0.6)

or

\begin{aligned}0 = \left( \frac{\partial {U}}{\partial {P}} \right)_{T} -T \left( \frac{\partial {S}}{\partial {P}} \right)_{T} + P \left( \frac{\partial {V}}{\partial {P}} \right)_{T}\end{aligned} \hspace{\stretch{1}}(1.0.7a)

\begin{aligned}0 = \left( \frac{\partial {U}}{\partial {T}} \right)_{P} -T \left( \frac{\partial {S}}{\partial {T}} \right)_{P} + P \left( \frac{\partial {V}}{\partial {T}} \right)_{P}.\end{aligned} \hspace{\stretch{1}}(1.0.7b)

Differentials in $P, S$

\begin{aligned}0 &= dU - T dS + P dV \\ &= \left( \frac{\partial {U}}{\partial {P}} \right)_{S} dP + \left( \frac{\partial {U}}{\partial {S}} \right)_{P} dS- T dS+ P \left( \left( \frac{\partial {V}}{\partial {P}} \right)_{S} dP + \left( \frac{\partial {V}}{\partial {S}} \right)_{P} dS \right),\end{aligned} \hspace{\stretch{1}}(1.0.8)

or

\begin{aligned}\left( \frac{\partial {U}}{\partial {P}} \right)_{S} = -P \left( \frac{\partial {V}}{\partial {P}} \right)_{S}\end{aligned} \hspace{\stretch{1}}(1.0.9a)

\begin{aligned}T = \left( \frac{\partial {U}}{\partial {S}} \right)_{P} + P \left( \frac{\partial {V}}{\partial {S}} \right)_{P}.\end{aligned} \hspace{\stretch{1}}(1.0.9b)

Differentials in $P, U$

\begin{aligned}0 &= dU - T dS + P dV \\ &= dU - T \left( \left( \frac{\partial {S}}{\partial {P}} \right)_{U} dP + \left( \frac{\partial {S}}{\partial {U}} \right)_{P} dU \right)+ P\left( \left( \frac{\partial {V}}{\partial {P}} \right)_{U} dP + \left( \frac{\partial {V}}{\partial {U}} \right)_{P} dU \right),\end{aligned} \hspace{\stretch{1}}(1.0.10)

or

\begin{aligned}0 = 1 - T \left( \frac{\partial {S}}{\partial {U}} \right)_{P} + P \left( \frac{\partial {V}}{\partial {U}} \right)_{P} \end{aligned} \hspace{\stretch{1}}(1.0.11a)

\begin{aligned}T \left( \frac{\partial {S}}{\partial {P}} \right)_{U} = P \left( \frac{\partial {V}}{\partial {P}} \right)_{U}.\end{aligned} \hspace{\stretch{1}}(1.0.11b)

Differentials in $V, T$

\begin{aligned}0 &= dU - T dS + P dV \\ &= \left( \frac{\partial {U}}{\partial {V}} \right)_{T} dV + \left( \frac{\partial {U}}{\partial {T}} \right)_{V} dT - T \left( \left( \frac{\partial {S}}{\partial {V}} \right)_{T} dV + \left( \frac{\partial {S}}{\partial {T}} \right)_{V} dT \right)+ P dV,\end{aligned} \hspace{\stretch{1}}(1.0.12)

or

\begin{aligned}0 = \left( \frac{\partial {U}}{\partial {V}} \right)_{T} - T \left( \frac{\partial {S}}{\partial {V}} \right)_{T} + P \end{aligned} \hspace{\stretch{1}}(1.0.13a)

\begin{aligned}\left( \frac{\partial {U}}{\partial {T}} \right)_{V} = T \left( \frac{\partial {S}}{\partial {T}} \right)_{V}.\end{aligned} \hspace{\stretch{1}}(1.0.13b)
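As a sanity check, eq. 1.0.13 can be verified for the monoatomic ideal gas. A symbolic sketch using sympy follows; the explicit forms of $U$, $P$, and the Sackur-Tetrode entropy $S$ are inputs assumed here, not part of the general derivation above.

```python
import sympy as sp

T, V, N, kB, m, h = sp.symbols('T V N k_B m h', positive=True)

# Monoatomic ideal gas in the (T, V) variables
U = sp.Rational(3, 2) * N * kB * T       # internal energy
P = N * kB * T / V                       # ideal gas pressure
# Sackur-Tetrode entropy
S = N * kB * (sp.log((V / N)
      * (2 * sp.pi * m * kB * T / h**2)**sp.Rational(3, 2)) + sp.Rational(5, 2))

# eq. 1.0.13a: (dU/dV)_T - T (dS/dV)_T + P = 0
assert sp.simplify(sp.diff(U, V) - T * sp.diff(S, V) + P) == 0

# eq. 1.0.13b: (dU/dT)_V = T (dS/dT)_V
assert sp.simplify(sp.diff(U, T) - T * sp.diff(S, T)) == 0

print("eq. 1.0.13 identities verified")
```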

Differentials in $V, S$

\begin{aligned}0 &= dU - T dS + P dV \\ &= \left( \frac{\partial {U}}{\partial {V}} \right)_{S} dV + \left( \frac{\partial {U}}{\partial {S}} \right)_{V} dS - T dS+ P dV,\end{aligned} \hspace{\stretch{1}}(1.0.14)

or

\begin{aligned}P = -\left( \frac{\partial {U}}{\partial {V}} \right)_{S}\end{aligned} \hspace{\stretch{1}}(1.0.15a)

\begin{aligned}T = \left( \frac{\partial {U}}{\partial {S}} \right)_{V} .\end{aligned} \hspace{\stretch{1}}(1.0.15b)

Differentials in $V, U$

\begin{aligned}0 &= dU - T dS + P dV \\ &= dU- T \left( \left( \frac{\partial {S}}{\partial {V}} \right)_{U} dV + \left( \frac{\partial {S}}{\partial {U}} \right)_{V} dU \right)+ P \left( \left( \frac{\partial {V}}{\partial {V}} \right)_{U} dV + \left( \frac{\partial {V}}{\partial {U}} \right)_{V} dU \right)\end{aligned} \hspace{\stretch{1}}(1.0.16)

or, noting that $\left( \frac{\partial {V}}{\partial {V}} \right)_{U} = 1$ and $\left( \frac{\partial {V}}{\partial {U}} \right)_{V} = 0$,

\begin{aligned}0 = 1 - T \left( \frac{\partial {S}}{\partial {U}} \right)_{V} \end{aligned} \hspace{\stretch{1}}(1.0.17a)

\begin{aligned}T \left( \frac{\partial {S}}{\partial {V}} \right)_{U} = P.\end{aligned} \hspace{\stretch{1}}(1.0.17b)

Differentials in $S, T$

\begin{aligned}0 &= dU - T dS + P dV \\ &= \left( \left( \frac{\partial {U}}{\partial {S}} \right)_{T} dS + \left( \frac{\partial {U}}{\partial {T}} \right)_{S} dT \right)- T dS+ P \left( \left( \frac{\partial {V}}{\partial {S}} \right)_{T} dS + \left( \frac{\partial {V}}{\partial {T}} \right)_{S} dT \right),\end{aligned} \hspace{\stretch{1}}(1.0.18)

or

\begin{aligned}0 = \left( \frac{\partial {U}}{\partial {S}} \right)_{T} - T + P \left( \frac{\partial {V}}{\partial {S}} \right)_{T} \end{aligned} \hspace{\stretch{1}}(1.0.19a)

\begin{aligned}0 = \left( \frac{\partial {U}}{\partial {T}} \right)_{S} + P \left( \frac{\partial {V}}{\partial {T}} \right)_{S}.\end{aligned} \hspace{\stretch{1}}(1.0.19b)

Differentials in $S, U$

\begin{aligned}0 &= dU - T dS + P dV \\ &= dU - T dS+ P \left( \left( \frac{\partial {V}}{\partial {S}} \right)_{U} dS + \left( \frac{\partial {V}}{\partial {U}} \right)_{S} dU \right)\end{aligned} \hspace{\stretch{1}}(1.0.20)

or

\begin{aligned}\frac{1}{{P}} = - \left( \frac{\partial {V}}{\partial {U}} \right)_{S} \end{aligned} \hspace{\stretch{1}}(1.0.21a)

\begin{aligned}T = P \left( \frac{\partial {V}}{\partial {S}} \right)_{U}.\end{aligned} \hspace{\stretch{1}}(1.0.21b)

Differentials in $T, U$

\begin{aligned}0 &= dU - T dS + P dV \\ &= dU - T \left( \left( \frac{\partial {S}}{\partial {T}} \right)_{U} dT + \left( \frac{\partial {S}}{\partial {U}} \right)_{T} dU \right)+ P\left( \left( \frac{\partial {V}}{\partial {T}} \right)_{U} dT + \left( \frac{\partial {V}}{\partial {U}} \right)_{T} dU \right),\end{aligned} \hspace{\stretch{1}}(1.0.22)

or

\begin{aligned}0 = 1 - T \left( \frac{\partial {S}}{\partial {U}} \right)_{T} + P \left( \frac{\partial {V}}{\partial {U}} \right)_{T} \end{aligned} \hspace{\stretch{1}}(1.0.23a)

\begin{aligned}T \left( \frac{\partial {S}}{\partial {T}} \right)_{U} = P \left( \frac{\partial {V}}{\partial {T}} \right)_{U}.\end{aligned} \hspace{\stretch{1}}(1.0.23b)

# References

[1] John Baez. Entropic forces, 2012. URL http://johncarlosbaez.wordpress.com/2012/02/01/entropic-forces/. [Online; accessed 07-March-2013].

## PHY452H1S Basic Statistical Mechanics. Problem Set 4: Ideal gas

Posted by peeterjoot on March 3, 2013

# Disclaimer

## Question: Sackur-Tetrode entropy of an Ideal Gas

The entropy of an ideal gas is given by

\begin{aligned}S = N k_{\mathrm{B}}\left( \ln \left( \frac{V}{N} \left( \frac{4 \pi m E}{3 N h^2} \right) ^{3/2} \right) + \frac{5}{2} \right)\end{aligned} \hspace{\stretch{1}}(1.1.1)

Find the temperature of this gas via $(\partial S/ \partial E)_{V,N} = 1/T$. Find the energy per particle at which the entropy becomes negative. Is there any meaning to this temperature?

Taking derivatives we find

\begin{aligned}\frac{1}{{T}} &= \frac{\partial {}}{\partial {E}}\left( \not{{ N k_{\mathrm{B}} \ln \frac{V}{N} }} + N k_{\mathrm{B}} \frac{3}{2} \ln \left( \frac{4 \pi m E}{3 N h^2} \right) + \not{{N k_{\mathrm{B}} \frac{5}{2} }} \right) \\ &= \frac{3}{2} N k_{\mathrm{B}} \frac{1}{{E}}\end{aligned} \hspace{\stretch{1}}(1.1.2)

or

\begin{aligned}\boxed{T = \frac{2}{3} \frac{E}{N k_{\mathrm{B}} }}\end{aligned} \hspace{\stretch{1}}(1.1.3)

The energies for which the entropy is negative are given by

\begin{aligned}\left( \frac{4 \pi m E}{3 N h^2} \right)^{3/2}\le \frac{N}{V} e^{-5/2},\end{aligned} \hspace{\stretch{1}}(1.1.4)

or

\begin{aligned}E &\le \frac{3 N h^2}{4 \pi m} \left( \frac{N}{V e^{5/2}} \right)^{2/3} \\ &= \frac{3 h^2 N^{5/3}}{4 \pi m V^{2/3} e^{5/3}}.\end{aligned} \hspace{\stretch{1}}(1.1.5)

In terms of the temperature $T$ this negative entropy condition is given by

\begin{aligned}\not{{\frac{3 N}{2}}} k_{\mathrm{B}} T \le \not{{\frac{3 N}{2}}} \left( \frac{ N}{V} \right)^{2/3} \frac{h^2}{2 \pi m e^{5/3}},\end{aligned} \hspace{\stretch{1}}(1.1.6)

or

\begin{aligned}\boxed{\frac{\sqrt{2 \pi m k_{\mathrm{B}} T}}{h} \le \left( \frac{N}{V} \right)^{1/3} \frac{1}{{e^{5/6}}}.}\end{aligned} \hspace{\stretch{1}}(1.1.7)

There will be a volume per particle $V/N$ for which the length $h/\sqrt{2 \pi m k_{\mathrm{B}} T}$ (the thermal de Broglie wavelength) starts approaching the distance between atoms. This distance constrains the validity of the ideal gas entropy equation. Putting this quantity back into the entropy eq. 1.1.1 we have

\begin{aligned}\frac{S}{N k_{\mathrm{B}}} = \ln \frac{V}{N} \left( \frac{\sqrt{2 \pi m k_{\mathrm{B}} T}}{h} \right)^3 + \frac{5}{2}\end{aligned} \hspace{\stretch{1}}(1.1.8)

We see that a positive entropy requirement puts a bound on this distance (as a function of temperature) since we must also have

\begin{aligned}\frac{h}{\sqrt{2 \pi m k_{\mathrm{B}} T}} \ll \left( \frac{V}{N} \right)^{1/3},\end{aligned} \hspace{\stretch{1}}(1.1.9)

for the gas to be in the classical domain. I’d actually expect a gas to liquefy before this transition point, making such a low temperature nonphysical. To get a feel for whether this is likely the case, we expect the logarithm argument

\begin{aligned}\frac{V}{N} \left( \frac{\sqrt{2 \pi m k_{\mathrm{B}} T}}{h} \right)^3\end{aligned} \hspace{\stretch{1}}(1.1.10)

at the point where gases liquefy (at which point we assume the ideal gas law is no longer accurate) to be well above unity. Checking this for 1 liter of gas with $10^{23}$ atoms of hydrogen, helium, and neon respectively, we find the values of eq. 1.1.10 are

\begin{aligned}173.682, 130.462, 23993.\end{aligned} \hspace{\stretch{1}}(1.1.11)

At least for these first few cases we see that the ideal gas law has lost its meaning well before the temperatures below which the entropy would become negative.
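These values can be spot checked numerically. Here’s a sketch of such a check; the boiling points, the use of the atomic (not molecular) mass for hydrogen, and the 1 liter / $10^{23}$ atom configuration are assumptions inferred from the quoted numbers, so only rough agreement should be expected.

```python
import math

H_PLANCK = 6.62607015e-34   # Planck constant [J s]
K_B = 1.380649e-23          # Boltzmann constant [J/K]
AMU = 1.66053906660e-27     # atomic mass unit [kg]

def log_argument(mass_u, T, V=1e-3, N=1e23):
    """(V/N) (sqrt(2 pi m k_B T) / h)^3, the argument of the logarithm
    in eq. 1.1.8, for N atoms of mass mass_u (in u) in volume V (m^3)."""
    lam = H_PLANCK / math.sqrt(2 * math.pi * mass_u * AMU * K_B * T)
    return (V / N) / lam**3

# (name, atomic mass [u], assumed boiling point [K])
for name, mass_u, T_boil in [("H", 1.008, 20.3), ("He", 4.0026, 4.2), ("Ne", 20.18, 27.1)]:
    print(f"{name}: {log_argument(mass_u, T_boil):.0f}")
```

All three values come out well above unity, consistent with the quoted numbers.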

## Question: Ideal gas thermodynamics

An ideal gas starts at $(V_0, P_0)$ in the pressure-volume diagram (x-axis = $V$, y-axis = $P$), then moves at constant pressure to a larger volume $(V_1, P_0)$, then moves to a larger pressure at constant volume to $(V_1, P_1)$, and finally returns to $(V_0, P_0)$, thus undergoing a cyclic process (forming a triangle in $P-V$ plane). For each step, find the work done on the gas, the change in energy content, and heat added to the gas. Find the total work/energy/heat change over the entire cycle.

Our process is illustrated in fig. 1.1.

Fig 1.1: Cyclic pressure volume process

Step 1
This problem is somewhat underspecified. From the ideal gas law, regardless of how the gas got from the initial to the final states, we have

\begin{aligned}P_0 V_0 = N_0 k_{\mathrm{B}} T_0\end{aligned} \hspace{\stretch{1}}(1.0.12a)

\begin{aligned}P_0 V_1 = N_1 k_{\mathrm{B}} T_1\end{aligned} \hspace{\stretch{1}}(1.0.12b)

So a volume increase with fixed $P$ implies that there is a corresponding increase in $N T$. We could have, for example, an increase in the number of particles, as in the evaporation process illustrated in fig. 1.2, where a piston held down by (fixed) atmospheric pressure is pushed up as the additional gas boils off.

Fig 1.2: Evaporation process under (fixed) atmospheric pressure

Alternately, we could have a system such as that of fig. 1.3, where a fixed amount of gas is in contact with a heat source that supplies the energy required to induce the increase in temperature.

Fig 1.3: Gas of fixed mass absorbing heat

Regardless of the source of the energy that accounts for the increase in volume, the work done on the gas (the negative of the positive work the gas performs on the system, perhaps a piston as in the picture) is

\begin{aligned}d W_1 = - \int_{V_0}^{V_1} p dV = -P_0 (V_1 - V_0).\end{aligned} \hspace{\stretch{1}}(1.0.13)

Let’s now assume that we have the second sort of configuration above, where the total amount of gas is held fixed. From the ideal gas relations of eq. 1.0.12, and with $\Delta V = V_1 - V_0$, $\Delta T = T_1 - T_0$, and $N_1 = N_0 = N$, we have

\begin{aligned}P_0 \Delta V = N k_{\mathrm{B}} \Delta T.\end{aligned} \hspace{\stretch{1}}(1.0.14)

The change in energy of the ideal gas, assuming three degrees of freedom, is

\begin{aligned}d U = \frac{3}{2} N k_{\mathrm{B}} \Delta T = \frac{3}{2} P_0 \Delta V.\end{aligned} \hspace{\stretch{1}}(1.0.15)

The energy balance then requires that the total heat absorbed by the gas account for both the work the gas has done on the system and the increase in the internal energy of the gas. That is

\begin{aligned}d Q_1 &= \frac{3}{2} P_0 \Delta V + P_0 \Delta V \\ &= \frac{5}{2} P_0 \Delta V.\end{aligned} \hspace{\stretch{1}}(1.0.16)

Step 2

For this leg of the cycle we have no work done on the gas

\begin{aligned}d W_2 = -\int_{V_1}^{V_1} P dV = 0.\end{aligned} \hspace{\stretch{1}}(1.0.17)

We do, however have a change in energy. The energy of the gas is

\begin{aligned}U = \frac{3}{2} N k_{\mathrm{B}} T = \frac{3}{2} P V.\end{aligned} \hspace{\stretch{1}}(1.0.18)

With $\Delta P = P_1 - P_0$, the change of energy of the gas, the total heat absorbed by the gas, is

\begin{aligned}dU_2 = d Q_2 = \frac{3}{2} V_1 \Delta P.\end{aligned} \hspace{\stretch{1}}(1.0.19)

Step 3

For the final leg of the cycle, the work done on the gas is

\begin{aligned}d W_3 &= -\int_{V_1}^{V_0} P dV \\ &= \int_{V_0}^{V_1} P dV \\ &= \Delta V \frac{P_0 + P_1}{2}.\end{aligned} \hspace{\stretch{1}}(1.0.20)

Unlike the first leg of the cycle, the work done on the gas is positive this time (work must be done on the gas to compress it). The change in energy of the gas, however, is negative, with the difference between final and initial energy being

\begin{aligned}dU_3 &= \frac{3}{2} (P_0 V_0 - P_1 V_1) \\ &= -\frac{3}{2} (P_1 V_1 - P_0 V_0) <0.\end{aligned} \hspace{\stretch{1}}(1.0.21)

The simultaneous compression and the pressure reduction require energy to be removed from the gas. We must have a negative change in heat $d Q < 0$, with heat emitted in this phase of the cycle. This can be verified explicitly

\begin{aligned}d Q_3 &= dU - d W \\ &= -\frac{3}{2} (P_1 V_1 - P_0 V_0) - \frac{1}{{2}} \Delta V (P_1 + P_0)< 0.\end{aligned} \hspace{\stretch{1}}(1.0.22)

Changes over the complete cycle.

Summarizing the results from each of the phases, we have

\begin{aligned}d W_1 = -P_0 \Delta V\end{aligned} \hspace{\stretch{1}}(1.0.23a)

\begin{aligned}d Q_1 = \frac{5}{2} P_0 \Delta V \end{aligned} \hspace{\stretch{1}}(1.0.23b)

\begin{aligned}d U_1 = \frac{3}{2} P_0 \Delta V \end{aligned} \hspace{\stretch{1}}(1.0.23c)

\begin{aligned}d W_2 = 0 \end{aligned} \hspace{\stretch{1}}(1.0.24a)

\begin{aligned}d Q_2 = \frac{3}{2} V_1 \Delta P \end{aligned} \hspace{\stretch{1}}(1.0.24b)

\begin{aligned}d U_2 = \frac{3}{2} V_1 \Delta P \end{aligned} \hspace{\stretch{1}}(1.0.24c)

\begin{aligned}d W_3 = \Delta V \frac{P_0 + P_1}{2} \end{aligned} \hspace{\stretch{1}}(1.0.25a)

\begin{aligned}d Q_3 = -\frac{1}{{2}} ( 3(P_1 V_1 - P_0 V_0) + \Delta V (P_1 + P_0)) \end{aligned} \hspace{\stretch{1}}(1.0.25b)

\begin{aligned}d U_3 = -\frac{3}{2} ( P_1 V_1 - P_0 V_0 )\end{aligned} \hspace{\stretch{1}}(1.0.25c)

Summing the changes in the work we have

\begin{aligned}\sum_{i = 1}^3 d W_i = \frac{1}{{2}} \Delta V \Delta P > 0.\end{aligned} \hspace{\stretch{1}}(1.0.26)

This is the area of the triangle, as expected. Since it is positive, there is net work done on the gas.

We expect the energy changes to sum to zero, and this can be verified explicitly finding

\begin{aligned}\sum_{i = 1}^3 d U_i &= \frac{3}{2} P_0 \Delta V + \frac{3}{2} V_1 \Delta P -\frac{3}{2} ( P_1 V_1 - P_0 V_0 ) \\ &= 0.\end{aligned} \hspace{\stretch{1}}(1.0.27)

With net work done on the gas and no change in energy, the gas must emit a net amount of heat over the cycle, equal in magnitude to the total work done on it. This is confirmed by summation

\begin{aligned}\sum_{i = 1}^3 d Q_i &= \frac{5}{2} P_0 \Delta V +\frac{3}{2} V_1 \Delta P -\frac{1}{{2}} ( 3(P_1 V_1 - P_0 V_0) + \Delta V (P_1 + P_0)) \\ &= -\frac{1}{{2}} \Delta P \Delta V.\end{aligned} \hspace{\stretch{1}}(1.0.28)
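This bookkeeping is easy to check numerically. A sketch with arbitrary endpoint values follows; the per-leg expressions are those of eq. 1.0.23 through eq. 1.0.25.

```python
def cycle_legs(P0, P1, V0, V1):
    """(work done ON the gas, heat absorbed, energy change) for each leg of
    the triangular cycle, for a monoatomic ideal gas with U = (3/2) P V."""
    dV, dP = V1 - V0, P1 - P0
    return [
        (-P0 * dV, 2.5 * P0 * dV, 1.5 * P0 * dV),     # isobaric expansion
        (0.0, 1.5 * V1 * dP, 1.5 * V1 * dP),          # constant volume pressurization
        (0.5 * dV * (P0 + P1),                        # straight line return to start
         -0.5 * (3 * (P1 * V1 - P0 * V0) + dV * (P1 + P0)),
         -1.5 * (P1 * V1 - P0 * V0)),
    ]

P0, P1, V0, V1 = 1.0, 2.0, 1.0, 3.0
legs = cycle_legs(P0, P1, V0, V1)
for W, Q, dU in legs:
    assert abs(dU - (Q + W)) < 1e-12    # first law holds on each leg

W_tot = sum(leg[0] for leg in legs)
Q_tot = sum(leg[1] for leg in legs)
U_tot = sum(leg[2] for leg in legs)
print(W_tot, Q_tot, U_tot)  # expect dP dV / 2, -dP dV / 2, 0
```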

## Question: Adiabatic process for an Ideal Gas

Show that when an ideal monoatomic gas expands adiabatically, the temperature and pressure are related by

\begin{aligned}\frac{dT}{dP}=\frac{2}{5}\frac{T}{P}\end{aligned} \hspace{\stretch{1}}(1.0.29)

From (3.34b) of [1], we find that the adiabatic condition can be expressed algebraically as

\begin{aligned}0 = d Q = T dS = dU + P dV.\end{aligned} \hspace{\stretch{1}}(1.0.30)

With

\begin{aligned}U = \frac{3}{2} N k_{\mathrm{B}} T = \frac{3}{2} P V,\end{aligned} \hspace{\stretch{1}}(1.0.31)

this is

\begin{aligned}0 &= \frac{3}{2} V dP + \frac{3}{2} P dV + P dV \\ &= \frac{3}{2} V dP + \frac{5}{2} P dV.\end{aligned} \hspace{\stretch{1}}(1.0.32)

Dividing through by $P V$, this becomes a perfect differential, and we can integrate

\begin{aligned}0 &= 3 \int \frac{dP }{P}+ 5 \int \frac{dV}{V} \\ &= 3 \ln P + 5 \ln V + \ln C \\ &= 3 \ln PV + 2 \ln V + \ln C \\ &= \ln (N k_{\mathrm{B}} T)^3 + \ln \left( \frac{N k_{\mathrm{B}} T}{P} \right)^2 + \ln C.\end{aligned} \hspace{\stretch{1}}(1.0.33)

Exponentiating yields

\begin{aligned}T^5 = C' P^2.\end{aligned} \hspace{\stretch{1}}(1.0.34)

The desired relation follows by taking derivatives

\begin{aligned}2 C' P &= 5 T^4 \frac{dT}{dP} \\ &= 5 C' \frac{P^2}{T} \frac{dT}{dP},\end{aligned} \hspace{\stretch{1}}(1.0.35)

or

\begin{aligned}\frac{dT}{dP} =\frac{2}{5} \frac{T}{P},\end{aligned} \hspace{\stretch{1}}(1.0.36)

as desired.
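The same derivative can be checked symbolically from the implicit relation $T^5 = C' P^2$ (a sympy sketch, with $C$ standing in for the integration constant $C'$):

```python
import sympy as sp

P, C = sp.symbols('P C', positive=True)
T = (C * P**2)**sp.Rational(1, 5)   # the adiabat T^5 = C P^2, solved for T

# dT/dP should equal (2/5) T/P
assert sp.simplify(sp.diff(T, P) - sp.Rational(2, 5) * T / P) == 0
print("dT/dP = (2/5) T/P verified")
```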

# References

[1] C. Kittel and H. Kroemer. Thermal physics. WH Freeman, 1980.

## An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 3, 2013

That compilation now all of the following too (no further updates will be made to any of these) :

February 28, 2013 Rotation of diatomic molecules

February 28, 2013 Helmholtz free energy

February 26, 2013 Statistical and thermodynamic connection

February 24, 2013 Ideal gas

February 16, 2013 One dimensional well problem from Pathria chapter II

February 15, 2013 1D pendulum problem in phase space

February 14, 2013 Continuing review of thermodynamics

February 13, 2013 Lightning review of thermodynamics

February 11, 2013 Cartesian to spherical change of variables in 3d phase space

February 10, 2013 n SHO particle phase space volume

February 10, 2013 Change of variables in 2d phase space

February 10, 2013 Some problems from Kittel chapter 3

February 07, 2013 Midterm review, thermodynamics

February 06, 2013 Limit of unfair coin distribution, the hard way

February 05, 2013 Ideal gas and SHO phase space volume calculations

February 03, 2013 One dimensional random walk

February 02, 2013 1D SHO phase space

February 02, 2013 Application of the central limit theorem to a product of random vars

January 31, 2013 Liouville’s theorem questions on density and current

January 30, 2013 State counting

## Rotation of diatomic molecules

Posted by peeterjoot on February 28, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

## Question: Rotation of diatomic molecules ([2] problem 3.6)

In our first look at the ideal gas we considered only the translational energy of the particles. But molecules can rotate, with kinetic energy. The rotational motion is quantized, and the energy levels of a diatomic molecule are of the form

\begin{aligned}\epsilon(j) = j(j + 1) \epsilon_0\end{aligned} \hspace{\stretch{1}}(1.0.1)

where $j$ is any positive integer including zero: $j = 0, 1, 2, \cdots$. The multiplicity of each rotation level is $g(j) = 2 j + 1$.

### a

Find the partition function $Z_R(\tau)$ for the rotational states of one molecule. Remember that $Z$ is a sum over all states, not over all levels — this makes a difference.

### b

Evaluate $Z_R(\tau)$ approximately for $\tau \gg \epsilon_0$, by converting the sum to an integral.

### c

Do the same for $\tau \ll \epsilon_0$, by truncating the sum after the second term.

### d

Give expressions for the energy $U$ and the heat capacity $C$, as functions of $\tau$, in both limits. Observe that the rotational contribution to the heat capacity of a diatomic molecule approaches 1 (or, in conventional units, $k_{\mathrm{B}}$) when $\tau \gg \epsilon_0$.

### e

Sketch the behavior of $U(\tau)$ and $C(\tau)$, showing the limiting behaviors for $\tau \rightarrow \infty$ and $\tau \rightarrow 0$.

### a. Partition function $Z_R(\tau)$

To understand the reference to multiplicity recall (section 4.13 [1]) that the rotational Hamiltonian was of the form

\begin{aligned}H = \frac{\mathbf{L}^2}{2 M r^2},\end{aligned} \hspace{\stretch{1}}(1.0.2)

where the $\mathbf{L}^2$ eigenvectors satisfied

\begin{subequations}

\begin{aligned}\mathbf{L}^2 {\left\lvert {l m} \right\rangle} = l (l + 1) \hbar^2 {\left\lvert {l m} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.3a)

\begin{aligned}L_z {\left\lvert {l m} \right\rangle} = m \hbar {\left\lvert {l m} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.3b)

\end{subequations}

and $-l \le m \le l$, where $l$ is a non-negative integer. We see that $\epsilon_0$ is of the form

\begin{aligned}\epsilon_0 = \frac{\hbar^2}{2 M r^2},\end{aligned} \hspace{\stretch{1}}(1.0.4)

and our partition function is

\begin{aligned}Z_R(\tau) = \sum_{l = 0}^\infty \sum_{m = -l}^l e^{-l (l + 1)\epsilon_0/\tau}= \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

The summand has no dependence on $m$, so the sum over $m$ is trivial, producing the multiplicity factor $2 l + 1$. The terms being summed are plotted in fig 1.

Fig 1: Summation over m

To get a feel for how many terms are significant in these sums, we refer to the plot of the summand in fig 2. The partition function itself, truncated at $l = 30$ terms, is plotted in fig 3.

Fig 2: Plotting the partition function summand

Fig 3: Z_R(tau) truncated after 30 terms in log plot

### b. Evaluate partition function for large temperatures

If $\tau \gg \epsilon_0$, so that $\epsilon_0/\tau \ll 1$, all our exponentials are close to unity. Employing an integral approximation of the partition function, we can somewhat miraculously integrate this directly

\begin{aligned}Z_R(\tau) &\approx \int_0^\infty dl (2 l + 1) e^{-l(l+1)\epsilon_0/\tau} \\ &= \int_0^\infty dl \frac{d}{dl} \left( -\frac{\tau}{\epsilon_0} e^{-l(l+1)\epsilon_0/\tau} \right) \\ &= \frac{\tau}{\epsilon_0}\end{aligned} \hspace{\stretch{1}}(1.0.6)

### c. Evaluate partition function for small temperatures

When $\tau \ll \epsilon_0$, so that $\epsilon_0/\tau \gg 1$, the exponentials fall off increasingly rapidly as $l$ increases. Truncating the sum after the second term we have

\begin{aligned}Z_R(\tau) \approx 1 + 3 e^{-2 \epsilon_0/\tau}\end{aligned} \hspace{\stretch{1}}(1.0.7)
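Both limits are easy to check against a direct numerical summation of eq. 1.0.5 (a sketch, in units where $\epsilon_0 = 1$):

```python
import math

def Z_rot(tau, eps0=1.0, lmax=200):
    """Rotational partition function of eq. 1.0.5 by direct summation."""
    return sum((2 * l + 1) * math.exp(-l * (l + 1) * eps0 / tau)
               for l in range(lmax + 1))

# High temperature: Z ~ tau/eps0, up to small corrections
print(Z_rot(100.0), 100.0)
# Low temperature: Z ~ 1 + 3 exp(-2 eps0/tau)
print(Z_rot(0.2), 1 + 3 * math.exp(-2 / 0.2))
```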

### d. Energy and heat capacity

In the large $\epsilon_0/\tau$ domain (small temperatures) we have

\begin{aligned}U &= \tau^2 \frac{\partial {}}{\partial {\tau}} \ln Z \\ &= \tau^2 \frac{\partial {}}{\partial {\tau}} \ln \left( 1 + 3 e^{-2 \epsilon_0/\tau} \right) \\ &= \tau^2 \frac{3 (-2\epsilon_0)(-1/\tau^2) e^{-2 \epsilon_0/\tau}}{1 + 3 e^{-2 \epsilon_0/\tau}} \\ &= \frac{6 \epsilon_0 e^{-2 \epsilon_0/\tau}}{1 + 3 e^{-2 \epsilon_0/\tau}} \\ &\approx 6 \epsilon_0 e^{-2 \epsilon_0/\tau}.\end{aligned} \hspace{\stretch{1}}(1.0.8)

The specific heat in this domain is

\begin{aligned}C_{\mathrm{V}} = \frac{\partial {U}}{\partial {\tau}}\approx 12 \left( \frac{\epsilon_0}{\tau} \right)^2 e^{-2 \epsilon_0/\tau}.\end{aligned} \hspace{\stretch{1}}(1.0.9)

For the small $\epsilon_0/\tau$ (large temperatures) case we have

\begin{aligned}U = \tau^2 \frac{\partial {}}{\partial {\tau}} \ln Z= \tau^2 \frac{\partial {}}{\partial {\tau}} \ln \frac{\tau}{\epsilon_0}= \tau^2 \frac{1}{{\tau}}= \tau\end{aligned} \hspace{\stretch{1}}(1.0.10)

The heat capacity in this large temperature region is

\begin{aligned}C_{\mathrm{V}} = \frac{\partial {U}}{\partial {\tau}} = 1,\end{aligned} \hspace{\stretch{1}}(1.0.11)

which is unity as described in the problem.

### e. Sketch

The energy and heat capacities are roughly sketched in fig 4.

Fig 4: Energy and heat capacity

The rough sketch might suggest a zero point energy at zero temperature, but the exponential suppression factor drives $U \rightarrow 0$ as $\tau \rightarrow 0$. Plotting the energy (truncating the sums to 30 terms) in fig 5, we indeed see no such zero point energy.

Fig 5: Exact plot of the energy for a range of temperatures (30 terms of the sums retained)

That plotted energy is as follows, computed without first dropping any terms of the partition function

\begin{aligned}U &= \tau^2 \frac{\partial}{\partial \tau} \ln\left( \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right) \\ &= \epsilon_0\frac{\left( \sum_{l = 1}^\infty l (l + 1)(2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}{\left( \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)} \\ &= \epsilon_0\frac{\left( \sum_{l = 1}^\infty l (l + 1)(2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}{Z}\end{aligned} \hspace{\stretch{1}}(1.0.12)

For an integral approximation in the low temperature domain we have to use this, and not the truncated partition function. Doing that calculation (which isn’t as convenient, so I cheated and used Mathematica), we obtain

\begin{aligned}U \approx \epsilon_0 \frac{\int_1^\infty dl \, l (l + 1)(2 l + 1) e^{-l (l + 1)\epsilon_0/\tau}}{\int_0^\infty dl \, (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau}}=\epsilon_0 e^{-2 \epsilon_0/\tau} \left( 2 + \frac{\tau}{\epsilon_0} \right).\end{aligned} \hspace{\stretch{1}}(1.0.13)

This approximation, which has replaced the sums with integrals, is plotted in fig 6.

Fig 6: Low temperature approximation of the energy
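The integrals can be verified with sympy after the substitution $u = l(l+1)$, $du = (2l+1) dl$, which maps the limits $l \in [1, \infty)$ and $l \in [0, \infty)$ to $u \in [2, \infty)$ and $u \in [0, \infty)$ respectively (a sketch, writing $a = \epsilon_0/\tau$):

```python
import sympy as sp

u, a = sp.symbols('u a', positive=True)   # a = eps_0/tau

num = sp.integrate(u * sp.exp(-a * u), (u, 2, sp.oo))  # numerator after u = l(l+1)
den = sp.integrate(sp.exp(-a * u), (u, 0, sp.oo))      # denominator after u = l(l+1)
ratio = sp.simplify(num / den)

# Expect U/eps_0 = exp(-2 a) (2 + 1/a), a decaying exponential as tau -> 0
assert sp.simplify(ratio - (2 + 1 / a) * sp.exp(-2 * a)) == 0
print(ratio)
```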

From eq. 1.0.12, we can take one more derivative to calculate the exact specific heat

\begin{aligned}C_{\mathrm{V}} &= \epsilon_0\frac{\partial {}}{\partial {\tau}}\left(\frac{\left( \sum_{l = 1}^\infty l (l + 1)(2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}{\left( \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}\right) \\ &= \left( \frac{\epsilon_0}{\tau} \right)^2\left(\frac{\left( \sum_{l = 1}^\infty l^2 (l + 1)^2 (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}{\left( \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}-\frac{\left( \sum_{l = 1}^\infty l (l + 1)(2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)^2}{\left( \sum_{l = 0}^\infty (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)^2}\right) \\ &= \left( \frac{\epsilon_0}{\tau} \right)^2\frac{\left( \sum_{l = 1}^\infty l^2 (l + 1)^2 (2 l + 1) e^{-l (l + 1)\epsilon_0/\tau} \right)}{Z}- \frac{U^2}{\tau^2}.\end{aligned} \hspace{\stretch{1}}(1.0.14)

This is plotted to 30 terms in fig 7.

Fig 7: Specific heat to 30 terms
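The exact specific heat has the fluctuation form $(\epsilon_0/\tau)^2 \left( \left\langle (l(l+1))^2 \right\rangle - \left\langle l(l+1) \right\rangle^2 \right)$, which can be checked against a finite difference of the energy (a numerical sketch, with $\epsilon_0 = 1$):

```python
import math

EPS0 = 1.0

def moments(tau, lmax=400):
    """<x> and <x^2> for x = l(l+1), weighted by (2l+1) exp(-l(l+1) eps0/tau)."""
    w = [(2 * l + 1) * math.exp(-l * (l + 1) * EPS0 / tau) for l in range(lmax + 1)]
    Z = sum(w)
    x1 = sum(l * (l + 1) * wl for l, wl in enumerate(w)) / Z
    x2 = sum((l * (l + 1))**2 * wl for l, wl in enumerate(w)) / Z
    return x1, x2

def U(tau):
    return EPS0 * moments(tau)[0]

def C_V(tau):
    x1, x2 = moments(tau)
    return (EPS0 / tau)**2 * (x2 - x1**2)   # fluctuation form

tau, h = 2.0, 1e-5
print(C_V(tau), (U(tau + h) - U(tau - h)) / (2 * h))  # should agree closely
```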

# References

[1] B. R. Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

[2] C. Kittel and H. Kroemer. Thermal physics. WH Freeman, 1980.