Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Archive for March, 2013

PHY452H1S Basic Statistical Mechanics. Lecture 19: Bosons. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 28, 2013


Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Fermions summary

We’ve considered a momentum sphere as in fig. 1.1, and performed various approximations of the occupation sums, as in fig. 1.2.

Fig 1.1: Summation over momentum sphere

Fig 1.2: Fermion occupation

\begin{aligned}\epsilon \sim T^2\end{aligned} \hspace{\stretch{1}}(1.0.1.1)

\begin{aligned}C \sim T\end{aligned} \hspace{\stretch{1}}(1.0.1.1)

\begin{aligned}P \sim \text{constant}\end{aligned} \hspace{\stretch{1}}(1.0.1.1)

The physics of Fermi gases has an extremely wide range of applicability. Illustrating some of this range, here are some examples of Fermi temperatures (from E_{\mathrm{F}} = k_{\mathrm{B}} T_{\mathrm{F}})

  1. Electrons in copper: T_{\mathrm{F}} \sim 10^4 \mbox{K}
  2. Neutrons in neutron star: T_{\mathrm{F}} \sim 10^7 - 10^8 \mbox{K}
  3. Ultracold atomic gases: T_{\mathrm{F}} \sim (10 - 100) \mbox{n K}

Bosons

We’d like to work with a fixed number of particles, but the calculations are hard, so we move to the grand canonical ensemble

\begin{aligned}n_{\mathrm{B}}(\mathbf{k}) = \frac{1}{{ e^{\beta(\epsilon_\mathbf{k} - \mu)} - 1 }}\end{aligned} \hspace{\stretch{1}}(1.2)

Again, we’ll consider free particles with energy as in fig. 1.3, or

\begin{aligned}\epsilon_\mathbf{k} = \frac{\hbar^2 k^2}{2 m}.\end{aligned} \hspace{\stretch{1}}(1.3)

Fig 1.3: Free particle energy momentum distribution

 

Again introducing fugacity z = e^{\beta \mu}, we have

\begin{aligned}n_{\mathrm{B}}(\mathbf{k}) = \frac{1}{{ z^{-1} e^{\beta \epsilon_\mathbf{k}} - 1 }}\end{aligned} \hspace{\stretch{1}}(1.4)

We’ll consider systems for which

\begin{aligned}N = \sum_\mathbf{k} n_{\mathrm{B}}(\mathbf{k}) = \text{fixed}\end{aligned} \hspace{\stretch{1}}(1.5)

Observe that at large energies we have

\begin{aligned}n_{\mathrm{B}}(\text{large} \, \mathbf{k}) \sim z e^{-\beta \epsilon_\mathbf{k}}\end{aligned} \hspace{\stretch{1}}(1.6)

For small energies

\begin{aligned}n_{\mathrm{B}}(\mathbf{k} \rightarrow 0) \sim \frac{1}{{z^{-1} - 1}} = \frac{z}{1 - z}\end{aligned} \hspace{\stretch{1}}(1.7)

Observe that we require z < 1 (or \mu < 0) so that the number distribution is strictly positive for all energies. This tells us that the fugacity is a function of temperature, but there will be a point at which it must saturate. This is illustrated in fig. 1.4.

Fig 1.4: Density times cubed thermal de Broglie wavelength

 

Let’s calculate this density (assumed fixed for all temperatures)

\begin{aligned}\rho &= \frac{N}{V} \\ &= \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} \frac{1}{{z^{-1} e^{\beta \epsilon_\mathbf{k}} -1 }} \\ &= \frac{2}{(2 \pi)^2} \int_0^\infty k^2 dk \frac{1}{{z^{-1} e^{\beta \hbar^2 k^2/2m} -1 }} \\ &= \frac{2}{(2 \pi)^2} \left( \frac {2 m} {\beta \hbar^2} \right)^{3/2}\int_0^\infty \left( \frac {\beta \hbar^2} {2 m} \right)^{3/2}k^2 dk \frac{1}{{z^{-1} e^{\beta \hbar^2 k^2/2m} -1 }}\end{aligned} \hspace{\stretch{1}}(1.8)

With the substitution

\begin{aligned}x^2 = \beta \frac{\hbar^2 k^2}{2m},\end{aligned} \hspace{\stretch{1}}(1.9)

we find

\begin{aligned}\rho \lambda^3 &= \frac{2}{(2 \pi)^2} \left( \frac {2 \not{{m}}} {\not{{\beta \hbar^2}}} \right)^{3/2}\left( \frac{ 2 \pi \not{{\hbar^2 \beta}}}{\not{{m}}} \right)^{3/2}\int_0^\infty x^2 dx \frac{1}{{z^{-1} e^{x^2} -1 }} \\ &= \frac{4}{\sqrt{\pi}} \int_0^\infty dx \frac{x^2}{z^{-1} e^{x^2} - 1 } \\ &\equiv g_{3/2}(z).\end{aligned} \hspace{\stretch{1}}(1.10)

This implicitly defines a relationship for the fugacity as a function of temperature z = z(T).

It can be shown that

\begin{aligned}g_{3/2}(z) = z + \frac{z^2}{2^{3/2}}+ \frac{z^3}{3^{3/2}}+ \cdots\end{aligned} \hspace{\stretch{1}}(1.11)

As z \rightarrow 1 we end up with a zeta function, for which we can look up the value

\begin{aligned}g_{3/2}(z \rightarrow 1) = \sum_{n = 1}^\infty \frac{1}{{n^{3/2}}} = \zeta(3/2) \approx 2.612\end{aligned} \hspace{\stretch{1}}(1.12)

where the Riemann zeta function is defined as

\begin{aligned}\zeta(s) = \sum_{ n = 1 }^\infty \frac{1}{{n^s}}.\end{aligned} \hspace{\stretch{1}}(1.13)

\begin{aligned}g_{3/2}(z) = \rho \lambda^3\end{aligned} \hspace{\stretch{1}}(1.14)
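This isn’t from the lecture, but as a quick numerical sanity check, the integral and series forms of g_{3/2} are easy to compare directly. A minimal Python sketch (assuming numpy and scipy are available):

# Compare the integral form of g_{3/2}(z) against its power series, and against
# zeta(3/2) ~ 2.612 as z -> 1.
import numpy as np
from scipy.integrate import quad

def g32_integral(z):
    # g_{3/2}(z) = (4/sqrt(pi)) \int_0^\infty x^2 dx / (z^{-1} e^{x^2} - 1)
    val, _ = quad(lambda x: x**2 / (np.exp(x**2) / z - 1.0), 0.0, 8.0)
    return 4.0 / np.sqrt(np.pi) * val

def g32_series(z, nmax=5000):
    n = np.arange(1, nmax + 1)
    return np.sum(z**n / n**1.5)

for z in (0.1, 0.5, 0.9):
    print(z, g32_integral(z), g32_series(z))
print(g32_integral(0.9999))   # approaches zeta(3/2) ~ 2.612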

At high temperatures we have

\begin{aligned}\rho \lambda^3 \rightarrow 0\end{aligned} \hspace{\stretch{1}}(1.15)

(as T goes down, \rho \lambda^3 goes up)

Looking at g_{3/2}(z = 1) = \rho \lambda^3(T_{\mathrm{c}}) leads to

\begin{aligned}\boxed{k_{\mathrm{B}} T_{\mathrm{c}} = \left( \frac{\rho}{\zeta(3/2)} \right)^{2/3} \frac{ 2 \pi \hbar^2}{m}.}\end{aligned} \hspace{\stretch{1}}(1.16)
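For a feel for the numbers (my own illustrative values, not anything quoted in class), this critical temperature can be evaluated for a dilute atomic gas; both the density and the atomic mass below are assumptions on my part:

# Estimate k_B T_c = (rho/zeta(3/2))^{2/3} 2 pi hbar^2 / m for a dilute Bose gas.
# Density and atomic mass are illustrative assumptions, not values from the lecture.
import numpy as np
from scipy.constants import hbar, k as k_B, atomic_mass

zeta_32 = 2.612          # zeta(3/2), as quoted above
rho = 1e19               # particles per m^3 (typical dilute cold-atom density, assumed)
m = 87 * atomic_mass     # rubidium-87 mass, assumed

T_c = (rho / zeta_32)**(2.0 / 3.0) * 2.0 * np.pi * hbar**2 / (m * k_B)
print("T_c ~ %.3g K" % T_c)   # of order 100 nK for these numbers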

How do I satisfy number conservation?

We have a problem here since as T \rightarrow 0 the 1/\lambda^3 \sim T^{3/2} term in \rho above drops to zero, yet g_{3/2}(z) cannot keep increasing without bounds to compensate and keep the density fixed. The way to deal with this was worked out by

  1. Bose (1924) for photons (examining statistics for symmetric wave functions).
  2. Einstein (1925) for conserved particles.

To deal with this issue, we (somewhat arbitrarily, because we need to) introduce a non-zero density for \mathbf{k} = 0. This is an adjustment of the approximation so that we have

\begin{aligned}\sum_{\mathbf{k}} \rightarrow \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} \qquad \mbox{Except around k = 0},\end{aligned} \hspace{\stretch{1}}(1.17)

as in fig. 1.5, so that

Fig 1.5: Momentum sphere with origin omitted

 

\begin{aligned}\sum_\mathbf{k} = \left( \mbox{Contribution at k = 0} \right)+ V \int \frac{d^3 \mathbf{k}}{(2 \pi)^3}.\end{aligned} \hspace{\stretch{1}}(1.18)

Given this, we have

\begin{aligned}N= N_{\mathbf{k} = 0}+ V \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} n_{\mathrm{B}}(\mathbf{k})\end{aligned} \hspace{\stretch{1}}(1.19)

We can illustrate this as in fig. 1.6.

Fig 1.6: Boson occupation vs momentum

 

\begin{aligned}\rho= \rho_{\mathbf{k} = 0}+ \frac{1}{{\lambda^3}} g_{3/2}(z)= \rho_{\mathbf{k} = 0}+ \left( \frac{ \lambda(T_{\mathrm{c}}) }{ \lambda(T)} \right)^3 \frac{1}{{ \lambda^3(T_{\mathrm{c}})}}g_{3/2}(z)\end{aligned} \hspace{\stretch{1}}(1.20)

At T > T_{\mathrm{c}} we have \rho_{\mathbf{k} = 0} = 0, whereas at T < T_{\mathrm{c}} we must introduce a non-zero condensate density \rho_{\mathbf{k} = 0} if we want to be able to maintain the constant density constraint.
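Below T_{\mathrm{c}}, with z pinned at one, the excited state density is \zeta(3/2)/\lambda^3(T) \propto T^{3/2}, so the condensate fraction takes the standard form \rho_{\mathbf{k} = 0}/\rho = 1 - (T/T_{\mathrm{c}})^{3/2}. Here’s a tiny sketch (my own addition) tabulating that fraction:

# Condensate fraction for the ideal Bose gas: rho_0/rho = 1 - (T/T_c)^{3/2} below T_c,
# and zero above.
for t in (0.0, 0.25, 0.5, 0.75, 1.0, 1.25):     # t = T/T_c
    frac = max(0.0, 1.0 - t**1.5)
    print("T/T_c = %.2f  rho_0/rho = %.3f" % (t, frac))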


An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 27, 2013

Here’s my second update of my notes compilation for this course, including all of the following:

March 27, 2013 Fermi gas

March 26, 2013 Fermi gas thermodynamics

March 26, 2013 Fermi gas thermodynamics

March 23, 2013 Relativistic generalization of statistical mechanics

March 21, 2013 Kittel Zipper problem

March 18, 2013 Pathria chapter 4 diatomic molecule problem

March 17, 2013 Gibbs sum for a two level system

March 16, 2013 open system variance of N

March 16, 2013 probability forms of entropy

March 14, 2013 Grand Canonical/Fermion-Bosons

March 13, 2013 Quantum anharmonic oscillator

March 12, 2013 Grand canonical ensemble

March 11, 2013 Heat capacity of perturbed harmonic oscillator

March 10, 2013 Langevin small approximation

March 10, 2013 Addition of two one half spins

March 10, 2013 Midterm II reflection

March 07, 2013 Thermodynamic identities

March 06, 2013 Temperature

March 05, 2013 Interacting spin

plus everything detailed in the description of my first update and before.


PHY452H1S Basic Statistical Mechanics. Lecture 16: Fermi gas. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 27, 2013


Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Fermi gas

Review

Continuing a discussion of [1] section 8.1 content.

We found

\begin{aligned}n_{\mathbf{k}} = \frac{1}{{e^{\beta(\epsilon_k - \mu)} + 1}}\end{aligned} \hspace{\stretch{1}}(1.2.1)

With no spin

\begin{aligned}\int n_\mathbf{k} \times \frac{d^3 k}{(2\pi)^3} = \rho\end{aligned} \hspace{\stretch{1}}(1.2.2)

Fig 1.1: Occupancy at low temperature limit

 

Fig 1.2: Volume integral over momentum up to Fermi energy limit

 

\begin{aligned}\epsilon_{\mathrm{F}} = \frac{\hbar^2 k_{\mathrm{F}}^2}{2m}\end{aligned} \hspace{\stretch{1}}(1.2.3)

gives

\begin{aligned}k_{\mathrm{F}} = (6 \pi^2 \rho)^{1/3}\end{aligned} \hspace{\stretch{1}}(1.2.4)

\begin{aligned}\sum_\mathbf{k} n_\mathbf{k} = N\end{aligned} \hspace{\stretch{1}}(1.2.5)

\begin{aligned}\mathbf{k} = \frac{2\pi}{L}(n_x, n_y, n_z)\end{aligned} \hspace{\stretch{1}}(1.2.6)

This is for periodic boundary conditions (note: I filled in details in the last lecture using a particle in a box, whereas this periodic condition was intended; we see that both achieve the same result), where

\begin{aligned}\Psi(x + L) = \Psi(x)\end{aligned} \hspace{\stretch{1}}(1.2.7)

Moving on

\begin{aligned}\sum_{k_x} n(\mathbf{k}) = \sum_{p_x} \Delta p_x n(\mathbf{k})\end{aligned} \hspace{\stretch{1}}(1.2.8)

with

\begin{aligned}\Delta k_x = \frac{2 \pi}{L} \Delta p_x\end{aligned} \hspace{\stretch{1}}(1.2.9)

this gives

\begin{aligned}\sum_{k_x} n(\mathbf{k}) = \sum_{n_x} \frac{L}{2\pi} \Delta k_x \, n(\mathbf{k}) \rightarrow \frac{L}{2\pi} \int d k_x \, n(\mathbf{k})\end{aligned} \hspace{\stretch{1}}(1.2.10)

Over all dimensions

\begin{aligned}\sum_{\mathbf{k}} n_\mathbf{k} = \left( \frac{L}{2\pi} \right)^3 \left( \int d^3 \mathbf{k} \right)n(\mathbf{k})=N\end{aligned} \hspace{\stretch{1}}(1.2.11)

so that

\begin{aligned}\rho = \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} \, n(\mathbf{k})\end{aligned} \hspace{\stretch{1}}(1.2.12)

Again

\begin{aligned}k_{\mathrm{F}} = (6 \pi^2 \rho)^{1/3}\end{aligned} \hspace{\stretch{1}}(1.2.13)

Example: Spin considerations


\begin{aligned}\sum_{\mathbf{k}, m_s} = N\end{aligned} \hspace{\stretch{1}}(1.2.14)

\begin{aligned}\sum_{\mathbf{k}, m_s} \frac{1}{{e^{\beta(\epsilon_k - \mu)} + 1}} = (2 S + 1)\left( \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} n(\mathbf{k}) \right)L^3\end{aligned} \hspace{\stretch{1}}(1.2.15)

This gives us

\begin{aligned}k_{\mathrm{F}} = \left( \frac{ 6 \pi^2 \rho }{2 S + 1} \right)^{1/3}\end{aligned} \hspace{\stretch{1}}(1.2.16)

and again

\begin{aligned}\epsilon_{\mathrm{F}} = \frac{\hbar^2 k_{\mathrm{F}}^2}{2m}\end{aligned} \hspace{\stretch{1}}(1.2.17)
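To connect this with the Fermi temperature estimates quoted earlier, here’s a little numeric sketch (my own numbers) evaluating k_{\mathrm{F}}, \epsilon_{\mathrm{F}} and T_{\mathrm{F}} for conduction electrons, taking 2 S + 1 = 2 and a copper-like electron density that I’m assuming rather than one given in class:

# Fermi wavevector, energy and temperature for a spin-1/2 gas,
# k_F = (6 pi^2 rho/(2S+1))^{1/3}, eps_F = hbar^2 k_F^2/(2m).
import numpy as np
from scipy.constants import hbar, m_e, k as k_B, eV

rho = 8.5e28      # conduction electrons per m^3, copper-like (assumed)
g_s = 2           # spin degeneracy 2S + 1 for electrons

k_F = (6 * np.pi**2 * rho / g_s)**(1.0 / 3.0)
eps_F = hbar**2 * k_F**2 / (2 * m_e)
print("k_F   = %.3g 1/m" % k_F)
print("eps_F = %.3g eV" % (eps_F / eV))
print("T_F   = %.3g K" % (eps_F / k_B))   # a few times 10^4 K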


High Temperatures

Now we want to look at the higher temperature range, where the occupancy may look like fig. 1.3

Fig 1.3: Occupancy at higher temperatures

 

\begin{aligned}\mu(T = 0) = \epsilon_{\mathrm{F}}\end{aligned} \hspace{\stretch{1}}(1.2.18)

\begin{aligned}\mu(T \rightarrow \infty) \rightarrow - \infty\end{aligned} \hspace{\stretch{1}}(1.2.19)

so that for large T we have

\begin{aligned}\frac{1}{{e^{\beta(\epsilon_k - \mu)} + 1}} \rightarrow e^{-\beta(\epsilon_k - \mu)}\end{aligned} \hspace{\stretch{1}}(1.2.20)

\begin{aligned}\rho &= \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} e^{\beta \mu} e^{-\beta \epsilon_k} \\ &= e^{\beta \mu} \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} e^{-\beta \epsilon_k} \\ &= e^{\beta \mu} \int dk \frac{4 \pi k^2}{(2 \pi)^3} e^{-\beta \hbar^2 k^2/2m}.\end{aligned} \hspace{\stretch{1}}(1.2.21)

Mathematica (or integration by parts) tells us that

\begin{aligned}\frac{1}{{(2 \pi)^3}} \int 4 \pi k^2 dk e^{-a k^2} = \frac{1}{{(4 \pi a )^{3/2}}},\end{aligned} \hspace{\stretch{1}}(1.2.22)

so we have

\begin{aligned}\rho &= e^{\beta \mu} \left( \frac{2m}{ 4 \pi \beta \hbar^2} \right)^{3/2} \\ &= e^{\beta \mu} \left( \frac{2 m k_{\mathrm{B}} T 4 \pi^2 }{ 4 \pi h^2} \right)^{3/2} \\ &= e^{\beta \mu} \left( \frac{2 m k_{\mathrm{B}} T \pi }{ h^2} \right)^{3/2}\end{aligned} \hspace{\stretch{1}}(1.2.23)

Introducing \lambda for the thermal de Broglie wavelength, \lambda^3 \sim T^{-3/2}

\begin{aligned}\lambda \equiv \frac{h}{\sqrt{2 \pi m k_{\mathrm{B}} T}},\end{aligned} \hspace{\stretch{1}}(1.2.24)

we have

\begin{aligned}\rho = e^{\beta \mu} \frac{1}{{\lambda^3}}.\end{aligned} \hspace{\stretch{1}}(1.2.25)
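As a numerical illustration of this classical limit (my own numbers, a dilute room temperature gas rather than the electron gas), we can evaluate \lambda and the corresponding \mu = k_{\mathrm{B}} T \ln \rho \lambda^3:

# Thermal de Broglie wavelength and the classical chemical potential,
# lambda = h / sqrt(2 pi m k_B T), mu = k_B T ln(rho lambda^3).
# Density and mass below are assumed illustration values.
import numpy as np
from scipy.constants import h, k as k_B, atomic_mass, eV

rho = 2.5e25            # particles per m^3, roughly an ideal gas at 1 atm, 300 K (assumed)
m = 40 * atomic_mass    # argon-like mass (assumed)
T = 300.0

lam = h / np.sqrt(2 * np.pi * m * k_B * T)
print("lambda       = %.3g m" % lam)
print("rho*lambda^3 = %.3g" % (rho * lam**3))            # << 1, so the classical form applies
print("mu           = %.3g eV" % (k_B * T * np.log(rho * lam**3) / eV))   # large and negative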

Does it make any sense to have density as a function of temperature? A plot of the density, inappropriately extended down to low temperatures, is found in fig. 1.4 for a few arbitrarily chosen numerical values of the chemical potential \mu, where we see that it drops to zero with temperature. I suppose that makes sense if we are not holding volume constant.

Fig 1.4: Density as a function of temperature

 

We can write

\begin{aligned}\boxed{e^{\beta \mu} = \left( \rho \lambda^3 \right)}\end{aligned} \hspace{\stretch{1}}(1.2.26)

\begin{aligned}\frac{\mu}{k_{\mathrm{B}} T} = \ln \left( \rho \lambda^3 \right)\sim -\frac{3}{2} \ln T\end{aligned} \hspace{\stretch{1}}(1.2.27)

or (taking \rho (and/or volume?) as a constant) we have for large temperatures

\begin{aligned}\mu \propto -T \ln T\end{aligned} \hspace{\stretch{1}}(1.2.28)

The chemical potential is plotted in fig. 1.5, whereas this - k_{\mathrm{B}} T \ln k_{\mathrm{B}} T function is plotted in fig. 1.6. The contributions to \mu from the k_{\mathrm{B}} T \ln (\rho h^3 (2 \pi m)^{-3/2}) term are dropped for the high temperature approximation.

Fig 1.5: Chemical potential over degenerate to classical range

Fig 1.6: High temp approximation of chemical potential, extended back to T = 0

Pressure

\begin{aligned}P = - \frac{\partial {E}}{\partial {V}}\end{aligned} \hspace{\stretch{1}}(1.2.29)

For a classical ideal gas as in fig. 1.7 we have

Fig 1.7: Ideal gas pressure vs volume

 

\begin{aligned}P = \rho k_{\mathrm{B}} T\end{aligned} \hspace{\stretch{1}}(1.2.30)

For a Fermi gas at T = 0 we have

\begin{aligned}E &= \sum_\mathbf{k} \epsilon_k n_k \\ &= \sum_\mathbf{k} \epsilon_k \Theta(\mu_0 - \epsilon_k) \\ &= \frac{V}{(2\pi)^3} \int_{\epsilon_k < \mu_0} \frac{\hbar^2 \mathbf{k}^2}{2 m} d^3 \mathbf{k} \\ &= \frac{V}{(2\pi)^3} \int_0^{k_{\mathrm{F}}} \frac{\hbar^2 \mathbf{k}^2}{2 m} d^3 \mathbf{k} \\ &= \frac{V}{(2\pi)^3} \frac{\hbar^2}{2 m} \int_0^{k_{\mathrm{F}}} k^2 4 \pi k^2 d k\propto k_{\mathrm{F}}^5\end{aligned} \hspace{\stretch{1}}(1.2.31)

Specifically,

\begin{aligned}E(T = 0) = V \times \frac{3}{5} \underbrace{\epsilon_{\mathrm{F}}}_{\sim k_{\mathrm{F}}^2}\underbrace{\rho}_{\sim k_{\mathrm{F}}^3}\end{aligned} \hspace{\stretch{1}}(1.2.32)

or

\begin{aligned}\frac{E}{N} = \frac{3}{5} \epsilon_{\mathrm{F}}\end{aligned} \hspace{\stretch{1}}(1.2.33)

\begin{aligned}E = \frac{3}{5} N \frac{\hbar^2}{2 m} \left( 6 \pi^2 \frac{N}{V} \right)^{2/3} = a V^{-2/3},\end{aligned} \hspace{\stretch{1}}(1.2.34)

so that

\begin{aligned}\frac{\partial {E}}{\partial {V}} = -\frac{2}{3} a V^{-5/3}.\end{aligned} \hspace{\stretch{1}}(1.2.35)

\begin{aligned}P &= -\frac{\partial {E}}{\partial {V}}  \\ &= \frac{2}{3} \left( a V^{-2/3} \right)V^{-1} \\ &= \frac{2}{3} \frac{E}{V} \\ &= \frac{2}{3} \left( \frac{3}{5} \epsilon_{\mathrm{F}} \rho \right) \\ &= \frac{2}{5} \epsilon_{\mathrm{F}} \rho.\end{aligned} \hspace{\stretch{1}}(1.2.36)

We see that the pressure ends up deviating from the classical result at low temperatures, as sketched in fig. 1.8. This low temperature limit for the pressure 2 \epsilon_{\mathrm{F}} \rho/5 is called the degeneracy pressure.

Fig 1.8: Fermi degeneracy pressure
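For a rough sense of scale (my own numbers again, continuing with the assumed copper-like electron density), the degeneracy pressure 2 \epsilon_{\mathrm{F}} \rho/5 works out to an enormous value:

# Degeneracy pressure P = (2/5) eps_F rho at T = 0 for a spin-1/2 Fermi gas.
import numpy as np
from scipy.constants import hbar, m_e

rho = 8.5e28                              # electrons per m^3 (assumed)
k_F = (3 * np.pi**2 * rho)**(1.0 / 3.0)   # 6 pi^2 rho/(2S+1) with 2S+1 = 2
eps_F = hbar**2 * k_F**2 / (2 * m_e)
print("P = %.3g Pa" % (0.4 * eps_F * rho))   # ~ 4e10 Pa, a few hundred thousand atmospheres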

 

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.


PHY452H1S Basic Statistical Mechanics. Lecture 18: Fermi gas thermodynamics. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 26, 2013


Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Review

Last time we found that the low temperature behaviour of the chemical potential was quadratic as in fig. 1.1.

\begin{aligned}\mu =\mu(0) - a \frac{T^2}{T_{\mathrm{F}}}\end{aligned} \hspace{\stretch{1}}(1.1.1)

Fig 1.1: Fermi gas chemical potential

 

Specific heat

\begin{aligned}E = \sum_\mathbf{k} n_{\mathrm{F}}(\epsilon_\mathbf{k}, T) \epsilon_\mathbf{k}\end{aligned} \hspace{\stretch{1}}(1.1.2)

\begin{aligned}\frac{E}{V} &= \frac{1}{{(2\pi)^3}} \int d^3 \mathbf{k} n_{\mathrm{F}}(\epsilon_\mathbf{k}, T) \epsilon_\mathbf{k} \\ &= \int d\epsilon N(\epsilon) n_{\mathrm{F}}(\epsilon, T) \epsilon,\end{aligned} \hspace{\stretch{1}}(1.1.3)

where

\begin{aligned}N(\epsilon) = \frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)^{3/2}\sqrt{\epsilon}.\end{aligned} \hspace{\stretch{1}}(1.1.4)

Low temperature C_{\mathrm{V}}

\begin{aligned}\frac{\Delta E(T)}{V}=\int_0^\infty d\epsilon \, \epsilon \, N(\epsilon)\left( n_{\mathrm{F}}(\epsilon, T) - n_{\mathrm{F}}(\epsilon, 0) \right)\end{aligned} \hspace{\stretch{1}}(1.1.5)

The only change in the distribution (fig. 1.2) that is of interest is over the step portion of the distribution, and over this range of interest N(\epsilon) is approximately constant, as in fig. 1.3.

Fig 1.2: Fermi distribution

Fig 1.3: Fermi gas density of states

\begin{aligned}N(\epsilon) \approx  N(\mu)\end{aligned} \hspace{\stretch{1}}(1.0.6a)

\begin{aligned}\mu \approx  \epsilon_{\mathrm{F}},\end{aligned} \hspace{\stretch{1}}(1.0.6b)

so that

\begin{aligned}\Delta e \equiv\frac{\Delta E(T)}{V}\approx N(\epsilon_{\mathrm{F}})\int_0^\infty d\epsilon \, \epsilon\left( n_{\mathrm{F}}(\epsilon, T) - n_{\mathrm{F}}(\epsilon, 0) \right)=N(\epsilon_{\mathrm{F}})\int_{-\epsilon_{\mathrm{F}}}^\infty d x (\epsilon_{\mathrm{F}} + x)\left( n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, T) - n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, 0) \right).\end{aligned} \hspace{\stretch{1}}(1.0.7)

Here we’ve made a change of variables \epsilon = \epsilon_{\mathrm{F}} + x, so that we have near cancelation of the \epsilon_{\mathrm{F}} factor

\begin{aligned}\Delta e &= N(\epsilon_{\mathrm{F}})\epsilon_{\mathrm{F}}\int_{-\epsilon_{\mathrm{F}}}^\infty d x \underbrace{\left( n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, T) - n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, 0) \right)}_{\text{almost equal everywhere}}+N(\epsilon_{\mathrm{F}})\int_{-\epsilon_{\mathrm{F}}}^\infty d x \, x\left( n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, T) - n_{\mathrm{F}}(\epsilon_{\mathrm{F}} + x, 0) \right) \\ &\approx N(\epsilon_{\mathrm{F}})\int_{-\infty}^\infty d x \, x\left( \frac{1}{{ e^{\beta x} +1 }} - {\left.{{\frac{1}{{ e^{\beta x} +1 }}}}\right\vert}_{{T \rightarrow 0}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.8)

Here we’ve extended the integration range to -\infty, which is justified because the integrand is exponentially small for x \ll -k_{\mathrm{B}} T, so the region below -\epsilon_{\mathrm{F}} contributes only terms of order e^{-\beta \epsilon_{\mathrm{F}}}. Taking derivatives with respect to temperature we have

\begin{aligned}\frac{\partial (\Delta e)}{\partial T} &= -N(\epsilon_{\mathrm{F}})\int_{-\infty}^\infty d x \, x\frac{1}{{(e^{\beta x} + 1)^2}}\frac{d}{dT} e^{\beta x} \\ &= N(\epsilon_{\mathrm{F}})\int_{-\infty}^\infty d x \, x\frac{1}{{(e^{\beta x} + 1)^2}}e^{\beta x}\frac{x}{k_{\mathrm{B}} T^2}\end{aligned} \hspace{\stretch{1}}(1.0.9)

With \beta x = y, we have for T \ll T_{\mathrm{F}}

\begin{aligned}\frac{C}{V} &= N(\epsilon_{\mathrm{F}})\int_{-\infty}^\infty \frac{ dy y^2 e^y }{ (e^y + 1)^2 k_{\mathrm{B}} T^2} (k_{\mathrm{B}} T)^3 \\ &= N(\epsilon_{\mathrm{F}}) k_{\mathrm{B}}^2 T\underbrace{\int_{-\infty}^\infty \frac{ dy y^2 e^y }{ (e^y + 1)^2 } }_{\pi^2/3} \\ &= \frac{\pi^2}{3} N(\epsilon_{\mathrm{F}}) k_{\mathrm{B}} (k_{\mathrm{B}} T).\end{aligned} \hspace{\stretch{1}}(1.0.10)
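The \pi^2/3 value of that dimensionless integral is easy to verify numerically (my own quick check):

# Check that \int_{-\infty}^\infty y^2 e^y / (e^y + 1)^2 dy = pi^2/3.
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda y: y**2 * np.exp(y) / (np.exp(y) + 1.0)**2, -50.0, 50.0)
print(val, np.pi**2 / 3.0)   # both ~ 3.2899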

Using eq. 1.1.4 at the Fermi energy and

\begin{aligned}\frac{N}{V} = \rho\end{aligned} \hspace{\stretch{1}}(1.0.11a)

\begin{aligned}\epsilon_{\mathrm{F}} = \frac{\hbar^2 k_{\mathrm{F}}^2}{2 m}\end{aligned} \hspace{\stretch{1}}(1.0.11b)

\begin{aligned}k_{\mathrm{F}} = \left( 6 \pi^2 \rho \right)^{1/3},\end{aligned} \hspace{\stretch{1}}(1.0.11c)

we have

\begin{aligned}N(\epsilon_{\mathrm{F}}) &= \frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)^{3/2}\sqrt{\epsilon_{\mathrm{F}}} \\ &= \frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)^{3/2}\frac{\hbar k_{\mathrm{F}}}{\sqrt{2m}} \\ &= \frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)^{3/2}\frac{\hbar }{\sqrt{2m}} \left( 6 \pi^2 \rho \right)^{1/3} \\ &= \frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)\left( 6 \pi^2 \frac{N}{V} \right)^{1/3}\end{aligned} \hspace{\stretch{1}}(1.0.12)

Giving

\begin{aligned}\frac{C}{N} &= \frac{\pi^2}{3} \frac{V}{N}\frac{1}{{4 \pi^2}}\left( \frac{2m}{\hbar^2} \right)\left( 6 \pi^2 \frac{N}{V} \right)^{1/3}k_{\mathrm{B}} (k_{\mathrm{B}} T) \\ &= \left( \frac{m}{6 \hbar^2} \right)\left( \frac{V}{N} \right)^{2/3}\left( 6 \pi^2 \right)^{1/3}k_{\mathrm{B}} (k_{\mathrm{B}} T) \\ &= \left( \frac{ \pi^2 m}{\hbar^2} \right)\left( \frac{V}{6 \pi^2 N} \right)^{2/3}k_{\mathrm{B}} (k_{\mathrm{B}} T) \\ &= \left( \frac{ \pi^2 m}{\hbar^2} \right)\frac{\hbar^2}{2 m \epsilon_{\mathrm{F}}}k_{\mathrm{B}} (k_{\mathrm{B}} T),\end{aligned} \hspace{\stretch{1}}(1.0.13)

or

\begin{aligned}\boxed{\frac{C}{N} = \frac{\pi^2}{2} k_{\mathrm{B}} \frac{ k_{\mathrm{B}} T}{\epsilon_{\mathrm{F}}}.}\end{aligned} \hspace{\stretch{1}}(1.0.14)

This is illustrated in fig. 1.4.

Fig 1.4: Specific heat per Fermion

 

Relativistic gas

  1. Relativistic gas

    \begin{aligned}\epsilon_\mathbf{k} = \pm \hbar v \left\lvert {\mathbf{k}} \right\rvert.\end{aligned} \hspace{\stretch{1}}(1.0.15)

    \begin{aligned}\epsilon = \sqrt{(m_0 c^2)^2 + c^2 (\hbar \mathbf{k})^2}\end{aligned} \hspace{\stretch{1}}(1.0.16)

  2. graphene
  3. massless Dirac Fermion

    Fig 1.5: Relativistic gas energy distribution

     

    We can think of this state distribution in a condensed matter view, where we can have a hole to electron state transition by supplying energy to the system (i.e. shining light on the substrate). This can also be thought of in a relativistic particle view, where the same state transition corresponds to a positron electron pair transition. A round trip transition will have to supply energy like 2 m_0 c^2 as illustrated in fig. 1.6.

    Fig 1.6: Hole to electron round trip transition energy requirement

     

Graphene

Consider graphene, a 2D system. We want to determine the density of states N(\epsilon),

\begin{aligned}\int \frac{d^2 \mathbf{k}}{(2 \pi)^2} \rightarrow \int_{-\infty}^\infty d\epsilon N(\epsilon),\end{aligned} \hspace{\stretch{1}}(1.0.17)

We’ll find a density of states distribution like fig. 1.7.

Fig 1.7: Density of states for 2D linear energy momentum distribution

 

\begin{aligned}N(\epsilon) = \text{constant factor} \frac{\left\lvert {\epsilon} \right\rvert}{v},\end{aligned} \hspace{\stretch{1}}(1.0.18)

\begin{aligned}C \sim \frac{d}{dT} \int N(\epsilon) n_{\mathrm{F}}(\epsilon) \epsilon d\epsilon,\end{aligned} \hspace{\stretch{1}}(1.0.19)

\begin{aligned}\Delta E \sim \underbrace{T}_{\text{window}}\times\underbrace{T}_{\text{energy}}\times\underbrace{T}_{\text{number of states}}\sim T^3\end{aligned} \hspace{\stretch{1}}(1.0.20)

so that

\begin{aligned}C_{\mathrm{Dimensionless}} \sim T^2\end{aligned} \hspace{\stretch{1}}(1.0.21)


PHY452H1S Basic Statistical Mechanics. Lecture 17: Fermi gas thermodynamics. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 26, 2013


Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Fermi gas thermodynamics

  • Energy was found to be

    \begin{aligned}\frac{E}{N} = \frac{3}{5} \epsilon_{\mathrm{F}}\qquad \text{where} \quad T = 0.\end{aligned} \hspace{\stretch{1}}(1.2.1)

  • Pressure was found to have the form fig. 1.1

    Fig 1.1: Pressure in Fermi gas

  • The chemical potential was found to have the form fig. 1.2.

    \begin{aligned}e^{\beta \mu} = \rho \lambda_{\mathrm{T}}^3\end{aligned} \hspace{\stretch{1}}(1.0.2a)

    \begin{aligned}\lambda_{\mathrm{T}} = \frac{h}{\sqrt{ 2 \pi m k_{\mathrm{B}} T}},\end{aligned} \hspace{\stretch{1}}(1.0.2b)

    so that the zero crossing is approximately when

    \begin{aligned}e^{\beta \times 0} = 1 = \rho \lambda_{\mathrm{T}}^3.\end{aligned} \hspace{\stretch{1}}(1.0.3)

    That last identification provides the relation T \sim  T_{\mathrm{F}}. FIXME: that bit wasn’t clear to me.

    Fig 1.2: Chemical potential in Fermi gas

How about at other temperatures?

  • \mu(T) = ?
  • E(T) = ?
  • C_{\mathrm{V}}(T) = ?

We had

\begin{aligned}N = \sum_k \frac{1}{{e^{\beta (\epsilon_k - \mu)} + 1}} = \sum_{\mathbf{k}} n_{\mathrm{F}}(\epsilon_\mathbf{k})\end{aligned} \hspace{\stretch{1}}(1.0.4)

\begin{aligned}E(T) =\sum_k \epsilon_\mathbf{k} n_{\mathrm{F}}(\epsilon_\mathbf{k}).\end{aligned} \hspace{\stretch{1}}(1.0.5)

FIXME: references to earlier sections where these were derived.

We can define a density of states

\begin{aligned}\sum_\mathbf{k} &= \sum_\mathbf{k} \int_{-\infty}^\infty d\epsilon  \delta(\epsilon  - \epsilon_\mathbf{k}) \\ &= \int_{-\infty}^\infty d\epsilon \sum_\mathbf{k}\delta(\epsilon  - \epsilon_\mathbf{k}),\end{aligned} \hspace{\stretch{1}}(1.0.6)

where the liberty to informally switch the order of differentiation and integration has been used. This construction allows us to write a more general sum

\begin{aligned}\sum_\mathbf{k} f(\epsilon_\mathbf{k}) &= \sum_\mathbf{k} \int_{-\infty}^\infty d\epsilon  \delta(\epsilon  - \epsilon_\mathbf{k}) f(\epsilon_\mathbf{k}) \\ &= \sum_\mathbf{k}\int_{-\infty}^\infty d\epsilon \delta(\epsilon  - \epsilon_\mathbf{k})f(\epsilon) \\ &=\int_{-\infty}^\infty d\epsilon  f(\epsilon)\left( \sum_\mathbf{k} \delta(\epsilon  - \epsilon_\mathbf{k}) \right).\end{aligned} \hspace{\stretch{1}}(1.0.7)

This sum, evaluated using a continuum approximation, is

\begin{aligned}N(\epsilon ) &\equiv \sum_\mathbf{k}\delta(\epsilon  - \epsilon_\mathbf{k}) \\ &= \frac{V}{(2 \pi)^3} \int d^3 \mathbf{k} \delta\left( \epsilon  - \frac{\hbar^2 k^2}{2 m} \right) \\ &= \frac{V}{(2 \pi)^3} 4 \pi \int_0^\infty k^2 dk \delta\left( \epsilon  - \frac{\hbar^2 k^2}{2 m} \right)\end{aligned} \hspace{\stretch{1}}(1.0.8)

Using

\begin{aligned}\delta(g(x)) = \sum_{x_0} \frac{\delta(x - x_0)}{\left\lvert {g'(x_0)} \right\rvert},\end{aligned} \hspace{\stretch{1}}(1.0.9)

where the roots of g(x) are x_0, we have

\begin{aligned}N(\epsilon ) &= \frac{V}{(2 \pi)^3} 4 \pi \int_0^\infty k^2 dk \delta\left( k - \frac{\sqrt{2 m \epsilon }}{\hbar} \right)\frac{m \hbar }{ \hbar^2 \sqrt{2 m \epsilon }} \\ &= \frac{V}{(2 \pi)^3} 2 \pi \frac{2 m \epsilon }{\hbar^2}\frac{2 m \hbar }{ \hbar^2 \sqrt{2 m \epsilon }} \\ &= V \left( \frac{2 m}{\hbar^2} \right)^{3/2} \frac{1}{{4 \pi^2}} \sqrt{\epsilon }.\end{aligned} \hspace{\stretch{1}}(1.0.10)

In 2D this would be

\begin{aligned}N(\epsilon ) \sim  V \int dk k \delta \left( \epsilon  - \frac{\hbar^2 k^2}{2m} \right) = V \frac{\sqrt{2 m \epsilon }}{\hbar} \frac{m \hbar}{\hbar^2 \sqrt{ 2 m \epsilon }} \sim  V\end{aligned} \hspace{\stretch{1}}(1.0.11)

and in 1D

\begin{aligned}N(\epsilon ) &\sim  V \int dk \delta \left( \epsilon  - \frac{\hbar^2 k^2}{2m} \right) \\ &= V \frac{m \hbar}{\hbar^2 \sqrt{ 2 m \epsilon }} \\ &\sim  \frac{1}{{\sqrt{\epsilon }}}.\end{aligned} \hspace{\stretch{1}}(1.0.12)

What happens when we have linear energy momentum relationships?

Suppose that we have a linear energy momentum relationship like

\begin{aligned}\epsilon_\mathbf{k} = v \left\lvert {\mathbf{k}} \right\rvert.\end{aligned} \hspace{\stretch{1}}(1.0.13)

An example of such a relationship is the high velocity relation between the energy and momentum of a particle

\begin{aligned}\epsilon_\mathbf{k} = \sqrt{ m_0^2 c^4 + p^2 c^2 } \sim  \left\lvert {\mathbf{p}} \right\rvert c.\end{aligned} \hspace{\stretch{1}}(1.0.14)

Another example is graphene, a carbon structure of the form fig. 1.3. The energy and momentum for such a structure are related roughly as shown in fig. 1.4, where

Fig 1.3: Graphene bond structure

 

Fig 1.4: Graphene energy momentum dependence

 

\begin{aligned}\epsilon_\mathbf{k} = \pm v_{\mathrm{F}} \left\lvert {\mathbf{k}} \right\rvert.\end{aligned} \hspace{\stretch{1}}(1.0.15)

Continuing with the 3D case we have

FIXME: Is this (or how is this) related to the linear energy momentum relationships for Graphene like substances?

\begin{aligned}N = V \int_0^\infty d\epsilon \,\underbrace{n_{\mathrm{F}}(\epsilon )}_{1/(e^{\beta (\epsilon  - \mu)} + 1)}\underbrace{N(\epsilon )}_{\epsilon ^{1/2}}\end{aligned} \hspace{\stretch{1}}(1.0.16)

\begin{aligned}\rho &= \frac{N}{V} \\ &= \left( \frac{2m}{\hbar^2 } \right)^{3/2} \frac{1}{{ 4 \pi^2}}\int_0^\infty d\epsilon  \frac{\epsilon ^{1/2}}{z^{-1} e^{\beta \epsilon } + 1} \\ &= \left( \frac{2m}{\hbar^2 } \right)^{3/2} \frac{1}{{ 4 \pi^2}}\left( k_{\mathrm{B}} T \right)^{3/2}\int_0^\infty dx \frac{x^{1/2}}{z^{-1} e^{x} + 1}\end{aligned} \hspace{\stretch{1}}(1.0.17)

where z = e^{\beta \mu} as usual, and we write x = \beta \epsilon . For the low temperature asymptotic behavior see [1] appendix section E. For z large it can be shown that this is

\begin{aligned}\int_0^\infty dx \frac{x^{1/2}}{z^{-1} e^{x} + 1}\approx \frac{2}{3}\left( \ln z \right)^{3/2}\left( 1 + \frac{\pi^2}{8} \frac{1}{{(\ln z)^2}} \right),\end{aligned} \hspace{\stretch{1}}(1.0.18)

so that

\begin{aligned}\rho &\approx  \left( \frac{2m}{\hbar^2 } \right)^{3/2} \frac{1}{{ 4 \pi^2}}\left( k_{\mathrm{B}} T \right)^{3/2}\frac{2}{3}\left( \ln z \right)^{3/2}\left( 1 + \frac{\pi^2}{8} \frac{1}{{(\ln z)^2}} \right) \\ &= \left( \frac{2m}{\hbar^2 } \right)^{3/2} \frac{1}{{ 4 \pi^2}}\frac{2}{3}\mu^{3/2}\left( 1 + \frac{\pi^2}{8} \frac{1}{{(\beta \mu)^2}} \right) \\ &= \left( \frac{2m}{\hbar^2 } \right)^{3/2} \frac{1}{{ 4 \pi^2}}\frac{2}{3}\mu^{3/2}\left( 1 + \frac{\pi^2}{8} \left( \frac{k_{\mathrm{B}} T}{\mu} \right)^2 \right) \\ &= \rho_{T = 0}\left( \frac{\mu}{ \epsilon_{\mathrm{F}} } \right)^{3/2}\left( 1 + \frac{\pi^2}{8} \left( \frac{k_{\mathrm{B}} T}{\mu} \right)^2 \right)\end{aligned} \hspace{\stretch{1}}(1.0.19)

Assuming a quadratic form for the chemical potential at low temperature as in fig. 1.5, we have

Fig 1.5: Assumed quadratic form for low temperature chemical potential

 

\begin{aligned}1 &= \left( \frac{\mu}{ \epsilon_{\mathrm{F}} } \right)^{3/2}\left( 1 + \frac{\pi^2}{8} \left( \frac{k_{\mathrm{B}} T}{\mu} \right)^2 \right) \\ &= \left( \frac{\epsilon_{\mathrm{F}} - a T^2}{ \epsilon_{\mathrm{F}} } \right)^{3/2}\left( 1 + \frac{\pi^2}{8} \left( \frac{k_{\mathrm{B}} T}{\epsilon_{\mathrm{F}} - a T^2} \right)^2 \right) \\ &\approx  \left( 1 - \frac{3}{2} a \frac{T^2}{\epsilon_{\mathrm{F}}} \right)\left( 1 + \frac{\pi^2}{8} \frac{(k_{\mathrm{B}} T)^2}{\epsilon_{\mathrm{F}}^2} \right) \\ &\approx  1 - \frac{3}{2} a \frac{T^2}{\epsilon_{\mathrm{F}}} + \frac{\pi^2}{8} \frac{(k_{\mathrm{B}} T)^2}{\epsilon_{\mathrm{F}}^2},\end{aligned} \hspace{\stretch{1}}(1.0.20)

or

\begin{aligned}a = \frac{\pi^2}{12} \frac{k_{\mathrm{B}}^2}{\epsilon_{\mathrm{F}}},\end{aligned} \hspace{\stretch{1}}(1.0.21)

We have used a Taylor expansion (1 + x)^n \approx  1 + n x for small x, for an end result of

\begin{aligned}\mu = \epsilon_{\mathrm{F}} - \frac{\pi^2}{12} \frac{(k_{\mathrm{B}} T)^2}{\epsilon_{\mathrm{F}}}.\end{aligned} \hspace{\stretch{1}}(1.0.22)
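Both the large z asymptotic form eq. (1.0.18) and this quadratic \mu(T) can be spot checked numerically. Here’s a minimal sketch (my own check, working in units where \epsilon_{\mathrm{F}} = 1), solving the fixed density condition for \mu at a small T/T_{\mathrm{F}} and comparing against the result above:

# Solve the density constraint \int_0^\infty x^{1/2} dx/(z^{-1} e^x + 1) = (2/3)(eps_F/(k_B T))^{3/2}
# for mu = k_B T ln z, and compare with mu ~ eps_F - (pi^2/12)(k_B T)^2/eps_F.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def fermi_half(logz):
    # \int_0^\infty sqrt(x) dx / (exp(x - logz) + 1)
    val, _ = quad(lambda x: np.sqrt(x) / (np.exp(x - logz) + 1.0), 0.0, logz + 60.0)
    return val

t = 0.05                                  # k_B T / eps_F, with eps_F = 1
target = (2.0 / 3.0) * t**-1.5            # fixed density condition
mu = t * brentq(lambda lz: fermi_half(lz) - target, 0.1 / t, 2.0 / t)
print(mu, 1.0 - (np.pi**2 / 12.0) * t**2)   # both close to 0.998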

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.


Relativistic generalization of statistical mechanics

Posted by peeterjoot on March 22, 2013


Motivation

I was wondering how to generalize the arguments of [1] to relativistic systems. Here’s a bit of blundering through the non-relativistic arguments of that text, tweaking them slightly.

I’m sure this has all been done before, but was a useful exercise to understand the non-relativistic arguments of Pathria better.

Generalizing from energy to four momentum

Generalizing the arguments of section 1.1.

Instead of considering that the total energy of the system is fixed, it makes sense that we’d have to instead consider the total four-momentum of the system fixed, so if we have N particles, we have a total four momentum

\begin{aligned}P = \sum_i n_i P_i = \sum n_i \left( \epsilon_i/c, \mathbf{p}_i \right),\end{aligned} \hspace{\stretch{1}}(1.2.1)

where n_i is the total number of particles with four momentum P_i. We can probably expect that the n_i‘s in this relativistic system will be smaller than those in a non-relativistic system since we have many more states when considering that we can have both specific energies and specific momentum, and the combinatorics of those extra degrees of freedom. However, we’ll still have

\begin{aligned}N = \sum_i n_i.\end{aligned} \hspace{\stretch{1}}(1.2.2)

Only given a specific observer frame can these four-momentum components \left( \epsilon_i/c, \mathbf{p}_i \right) be expressed explicitly, as in

\begin{aligned}\epsilon_i = \gamma_i m_i c^2\end{aligned} \hspace{\stretch{1}}(1.0.3a)

\begin{aligned}\mathbf{p}_i = \gamma_i m \mathbf{v}_i\end{aligned} \hspace{\stretch{1}}(1.0.3b)

\begin{aligned}\gamma_i = \frac{1}{{\sqrt{1 - \mathbf{v}_i^2/c^2}}},\end{aligned} \hspace{\stretch{1}}(1.0.3c)

where \mathbf{v}_i is the velocity of the particle in that observer frame.

Generalizing the number of microstates, and notion of thermodynamic equilibrium

Generalizing the arguments of section 1.2.

We can still count the number of all possible microstates, but that number, denoted \Omega(N, V, E), for a given total energy needs to be parameterized differently. First off, any given volume is observer dependent, so we likely need to map

\begin{aligned}V \rightarrow \int d^4 x = \int dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3.\end{aligned} \hspace{\stretch{1}}(1.0.4)

Let’s still call this V, but know that we mean this to be a four-volume element, bounded in both space and time, referred to a fixed observer’s frame. So, let’s write the total number of microstates as

\begin{aligned}\Omega(N, V, P) = \Omega \left( N, \int d^4 x, E/c, P^1, P^2, P^3 \right),\end{aligned} \hspace{\stretch{1}}(1.0.5)

where P = ( E/c, \mathbf{P} ) is the total four momentum of the system. Suppose we have a system subdivided into two systems in contact as in fig. 1.1, where the two systems have total four momentum P_1 and P_2 respectively.

Fig 1.1: Two physical systems in thermal contact

 

In the text the total energy of both systems was written

\begin{aligned}E^{(0)} = E_1 + E_2,\end{aligned} \hspace{\stretch{1}}(1.0.6)

so we’ll write

\begin{aligned}{P^{(0)}}^\mu = P_1^\mu + P_2^\mu = \text{constant},\end{aligned} \hspace{\stretch{1}}(1.0.7)

so that the total number of microstates of the combined system is now

\begin{aligned}\Omega^{(0)}(P_1, P_2) = \Omega_1(P_1) \Omega_2(P_2).\end{aligned} \hspace{\stretch{1}}(1.0.8)

As before, if \bar{{P}}^\mu_i denotes an equilibrium value of P_i^\mu, then maximizing eq. 1.0.8 requires all the derivatives (no sum over \mu here)

\begin{aligned}\left({\partial {\Omega_1(P_1)}}/{\partial {P^\mu_1}}\right)_{{P_1 = \bar{{P_1}}}}\Omega_2(\bar{{P}}_2)+\Omega_1(\bar{{P}}_1)\left({\partial {\Omega_2(P_2)}}/{\partial {P^\mu}}\right)_{{P_2 = \bar{{P_2}}}}\times\frac{\partial {P_2^\mu}}{\partial {P_1^\mu}}= 0.\end{aligned} \hspace{\stretch{1}}(1.0.9)

With each of the components of the total four-momentum P^\mu_1 + P^\mu_2 separately constant, we have {\partial {P_2^\mu}}/{\partial {P_1^\mu}} = -1, so that we have

\begin{aligned}\left({\partial {\ln \Omega_1(P_1)}}/{\partial {P^\mu_1}}\right)_{{P_1 = \bar{{P_1}}}}=\left({\partial {\ln \Omega_2(P_2)}}/{\partial {P^\mu}}\right)_{{P_2 = \bar{{P_2}}}},\end{aligned} \hspace{\stretch{1}}(1.0.10)

as before. However, we now have one such identity for each component of the total four momentum P which has been held constant. Let’s now define

\begin{aligned}\beta_\mu \equiv \left({\partial {\ln \Omega(N, V, P)}}/{\partial {P^\mu}}\right)_{{N, V, P = \bar{{P}}}},\end{aligned} \hspace{\stretch{1}}(1.0.11)

Our old scalar temperature is then

\begin{aligned}\beta_0 = c \left({\partial {\ln \Omega(N, V, P)}}/{\partial {E}}\right)_{{N, V, P = \bar{{P}}}} = c \beta = \frac{c}{k_{\mathrm{B}} T},\end{aligned} \hspace{\stretch{1}}(1.0.12)

but now we have three additional such constants to figure out what to do with. A first start would be figuring out how the Boltzmann probabilities should be generalized.

Equilibrium between a system and a heat reservoir

Generalizing the arguments of section 3.1.

As in the text, let’s consider a very large heat reservoir A' and a subsystem A as in fig. 1.2 that has come to a state of mutual equilibrium. This likely needs to be defined as a state in which the four vector \beta_\mu is common, as opposed to just \beta_0 the temperature field being common.

Fig 1.2: A system A immersed in heat reservoir A’

 

If the four momentum of the heat reservoir is P_r' with P_r for the subsystem, and

\begin{aligned}P_r + P_r' = P^{(0)} = \text{constant}.\end{aligned} \hspace{\stretch{1}}(1.0.13)

Writing

\begin{aligned}\Omega'({P^\mu_r}') = \Omega'(P^{(0)} - {P^\mu_r}) \propto P_r,\end{aligned} \hspace{\stretch{1}}(1.0.14)

for the number of microstates in the reservoir, so that a Taylor expansion of the logarithm around P_r' = P^{(0)} (with sums implied) is

\begin{aligned}\ln \Omega'({P^\mu_r}') = \ln \Omega'({P^{(0)}}) +\left({\partial {\ln \Omega'}}/{\partial {{P^\mu}'}}\right)_{{P' = P^{(0)}}} \left( {P^\mu_r}' - {P^{(0)}}^\mu \right)\approx\text{constant} - \beta_\mu' P_r^\mu.\end{aligned} \hspace{\stretch{1}}(1.0.15)

Here we’ve inserted the definition of \beta^\mu from eq. 1.0.11, so that at equilibrium, with \beta_\mu' = \beta_\mu, we obtain

\begin{aligned}\Omega'({P^\mu_r}') \propto \exp\left( - \beta_\mu P^\mu \right)=\exp\left( - \beta E \right)\exp\left( - \beta_1 P^1 \right)\exp\left( - \beta_2 P^2 \right)\exp\left( - \beta_3 P^3 \right).\end{aligned} \hspace{\stretch{1}}(1.0.16)

Next steps

This looks consistent with the outline provided in http://physics.stackexchange.com/a/4950/3621 by Lubos to the stackexchange “is there a relativistic quantum thermodynamics” question. I’m sure it wouldn’t be too hard to find references that explore this, as well as explain why non-relativistic stat mech can be used for photon problems. Further exploration of this should wait until after the studies for this course are done.

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.


Kittel Zipper problem

Posted by peeterjoot on March 20, 2013


Question: Zipper problem ([1] pr 3.7)

A zipper has N links; each link has a state in which it is closed with energy 0 and a state in which it is open with energy \epsilon. We require, however, that the zipper can only unzip from the left end, and that the link number s can only open if all links to the left (1, 2, \cdots, s - 1) are already open. Find (and sum) the partition function. In the low temperature limit k_{\mathrm{B}} T \ll \epsilon, find the average number of open links. The model is a very simplified model of the unwinding of two-stranded DNA molecules.

Answer

The system is depicted in fig. 1.1, in the E = 0 and E = \epsilon states.

Fig 1.1: Zipper molecule model in first two states

 

The left opening only constraint simplifies the combinatorics, since this restricts the available energies for the complete molecule to 0, \epsilon, 2 \epsilon, \cdots, N \epsilon.

The probability of finding the molecule with s links open is then

\begin{aligned}P_s =\frac{e^{- \beta s \epsilon}}{Z},\end{aligned} \hspace{\stretch{1}}(1.0.1)

with

\begin{aligned}Z = \sum_{s = 0}^N e^{- \beta s \epsilon}.\end{aligned} \hspace{\stretch{1}}(1.0.2)

We can sum this geometric series immediately

\begin{aligned}\boxed{Z =\frac{e^{-\beta (N+1) \epsilon} - 1}{e^{-\beta \epsilon } - 1}.}\end{aligned} \hspace{\stretch{1}}(1.0.3)

The expectation value for the number of links is

\begin{aligned}\left\langle{{s}}\right\rangle &= \sum_{s = 0}^N s P_s \\ &= \frac{1}{{Z}} \sum_{s = 1}^N s e^{- \beta s \epsilon} \\ &= -\frac{1}{{Z}} \frac{\partial {}}{\partial {(\beta \epsilon)}} \sum_{s = 1}^N e^{- \beta s \epsilon}.\end{aligned} \hspace{\stretch{1}}(1.0.4)

Let’s write

\begin{aligned}a = e^{-\beta \epsilon},\end{aligned} \hspace{\stretch{1}}(1.0.5)

and make a change of variables

\begin{aligned}-\frac{\partial {}}{\partial {(\beta \epsilon)}} &= \frac{\partial {}}{\partial {\ln a}} \\ &= \frac{\partial {a}}{\partial {\ln a}}\frac{\partial {}}{\partial {a}} \\ &= \frac{\partial {e^{-\beta \epsilon}}}{\partial {(-\beta \epsilon)}}\frac{\partial {}}{\partial {a}} \\ &= a\frac{\partial {}}{\partial {a}}\end{aligned} \hspace{\stretch{1}}(1.0.6)

so that

\begin{aligned}\frac{\partial {}}{\partial {\ln a}} \sum_{s = 1}^N a^s &= a \frac{d}{da} \left( \frac{a^{N+1} - a}{a - 1} \right) \\ &= a\left( \frac{(N+1) a^N - 1}{a - 1} - \frac{a^{N+1} - a} { (a - 1)^2 } \right) \\ &= \frac{a}{(a-1)^2}\left( \left( (N+1) a^N - 1 \right) (a - 1) - a^{N+1} + a \right) \\ &= \frac{a}{(a-1)^2}\left( N a^{N+1} -(N+1) a^N + 1 \right) \\ &= \frac{a}{(a-1)^2}\left( a^N ( N (a - 1) - 1 ) + 1 \right).\end{aligned} \hspace{\stretch{1}}(1.0.7)

The average number of links is thus

\begin{aligned}\left\langle{{k}}\right\rangle = \frac{a - 1}{a^{N+1} - 1}\frac{a}{(a-1)^2}\left( a^N ( N (a - 1) - 1 ) + 1 \right),\end{aligned} \hspace{\stretch{1}}(1.0.8)

or

\begin{aligned}\boxed{\left\langle{{k}}\right\rangle = \frac{1}{1 - e^{-\beta \epsilon(N+1)} }\frac{1}{e^{\beta \epsilon} - 1}\left( e^{-\beta \epsilon N} ( N (e^{-\beta \epsilon} - 1) - 1 ) + 1 \right).}\end{aligned} \hspace{\stretch{1}}(1.0.9)
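As a sanity check of this closed form (my own addition), it can be compared against a direct evaluation of the sums:

# Zipper model: compare <s> from the closed form against a direct sum over states.
import numpy as np

def avg_open_direct(N, be):          # be = beta * epsilon
    s = np.arange(0, N + 1)
    w = np.exp(-be * s)
    return np.sum(s * w) / np.sum(w)

def avg_open_closed(N, be):
    a = np.exp(-be)
    return (1.0 / (1.0 - a**(N + 1))) / (np.exp(be) - 1.0) \
        * (a**N * (N * (a - 1.0) - 1.0) + 1.0)

for be in (0.5, 1.0, 3.0):
    print(be, avg_open_direct(10, be), avg_open_closed(10, be))   # pairs agree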

In the very low temperature limit where \beta \epsilon \gg 1 (small T, big \beta), we have

\begin{aligned}\left\langle{{k}}\right\rangle \approx\frac{1}{e^{\beta \epsilon}}= e^{-\beta \epsilon},\end{aligned} \hspace{\stretch{1}}(1.0.10)

showing that on average no links are open at such low temperatures. An exact plot of \left\langle{{s}}\right\rangle for a few small N values is in fig. 1.2.

Fig 1.2: Average number of open links

 

References

[1] C. Kittel and H. Kroemer. Thermal physics. WH Freeman, 1980.


Pathria chapter 4 diatomic molecule problem

Posted by peeterjoot on March 18, 2013


Question: Diatomic molecule ([1] pr 4.7)

Consider a classical system of non-interacting, diatomic molecules enclosed in a box of volume V at temperature T. The Hamiltonian of a single molecule is given by

\begin{aligned}H(\mathbf{r}_1, \mathbf{r}_2, \mathbf{p}_1, \mathbf{p}_2) = \frac{1}{{2m}} \left( \mathbf{p}_1^2 + \mathbf{p}_2^2  \right)+\frac{1}{{2}} K \left\lvert {\mathbf{r}_1 - \mathbf{r}_2} \right\rvert^2.\end{aligned} \hspace{\stretch{1}}(1.0.1)

Study the thermodynamics of this system, including the dependence of the quantity \left\langle{{r_{12}^2}}\right\rangle on T.

Answer

Partition function
First consider the partition function for a single diatomic pair

\begin{aligned}Z_1 &= \frac{1}{{h^6}} \int d^6 \mathbf{p} d^6 \mathbf{r} e^{-\beta \frac{ \mathbf{p}_1^2 + \mathbf{p}_2^2 }{2m}} e^{-\beta K\frac{ \left\lvert {\mathbf{r}_1 - \mathbf{r}_2} \right\rvert^2 }{2}} \\ &= \frac{1}{{h^6}} \left( \frac{2 \pi m}{\beta} \right)^{6/2}\int d^3 \mathbf{r}_1 d^3 \mathbf{r}_2 e^{-\beta K\frac{ \left\lvert {\mathbf{r}_1 - \mathbf{r}_2} \right\rvert^2 }{2}}\end{aligned} \hspace{\stretch{1}}(1.0.2)

Now we can make a change of variables to simplify the exponential. Let’s write

\begin{aligned}\mathbf{u} = \mathbf{r}_1 - \mathbf{r}_2\end{aligned} \hspace{\stretch{1}}(1.0.3a)

\begin{aligned}\mathbf{v} = \mathbf{r}_2,\end{aligned} \hspace{\stretch{1}}(1.0.3b)

or

\begin{aligned}\mathbf{r}_2 = \mathbf{v}\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}\mathbf{r}_1=\mathbf{u} + \mathbf{v}.\end{aligned} \hspace{\stretch{1}}(1.0.4b)

Our volume element is

\begin{aligned}d^3 \mathbf{r}_1 d^3 \mathbf{r}_2 = d^3 \mathbf{u} d^3 \mathbf{v} \frac{\partial(\mathbf{r}_1, \mathbf{r}_2)}{\partial(\mathbf{u}, \mathbf{v})}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

It wasn’t obvious to me that this change of variables preserves the volume element, but a quick Jacobian calculation shows this to be the case

\begin{aligned}\frac{\partial(\mathbf{r}_1, \mathbf{r}_2)}{\partial(\mathbf{u}, \mathbf{v})} &= \begin{vmatrix}\partial r_{11}/\partial u_1 & \partial r_{11}/\partial u_2 &\partial r_{11}/\partial u_3 &\partial r_{11}/\partial v_1 &\partial r_{11}/\partial v_2 &\partial r_{11}/\partial v_3 \\ \partial r_{12}/\partial u_1 & \partial r_{12}/\partial u_2 &\partial r_{12}/\partial u_3 &\partial r_{12}/\partial v_1 &\partial r_{12}/\partial v_2 &\partial r_{12}/\partial v_3 \\ \partial r_{13}/\partial u_1 & \partial r_{13}/\partial u_2 &\partial r_{13}/\partial u_3 &\partial r_{13}/\partial v_1 &\partial r_{13}/\partial v_2 &\partial r_{13}/\partial v_3 \\ \partial r_{21}/\partial u_1 & \partial r_{21}/\partial u_2 &\partial r_{21}/\partial u_3 &\partial r_{21}/\partial v_1 &\partial r_{21}/\partial v_2 &\partial r_{21}/\partial v_3 \\ \partial r_{22}/\partial u_1 & \partial r_{22}/\partial u_2 &\partial r_{22}/\partial u_3 &\partial r_{22}/\partial v_1 &\partial r_{22}/\partial v_2 &\partial r_{22}/\partial v_3 \\ \partial r_{23}/\partial u_1 & \partial r_{23}/\partial u_2 &\partial r_{23}/\partial u_3 &\partial r_{23}/\partial v_1 &\partial r_{23}/\partial v_2 &\partial r_{23}/\partial v_3 \end{vmatrix} \\ &= \begin{vmatrix}\partial r_{11}/\partial u_1 & \partial r_{11}/\partial u_2 &\partial r_{11}/\partial u_3 &\partial r_{11}/\partial v_1 &\partial r_{11}/\partial v_2 &\partial r_{11}/\partial v_3 \\ \partial r_{12}/\partial u_1 & \partial r_{12}/\partial u_2 &\partial r_{12}/\partial u_3 &\partial r_{12}/\partial v_1 &\partial r_{12}/\partial v_2 &\partial r_{12}/\partial v_3 \\ \partial r_{13}/\partial u_1 & \partial r_{13}/\partial u_2 &\partial r_{13}/\partial u_3 &\partial r_{13}/\partial v_1 &\partial r_{13}/\partial v_2 &\partial r_{13}/\partial v_3 \\ 0 & 0 & 0 &\partial r_{21}/\partial v_1 &\partial r_{21}/\partial v_2 &\partial r_{21}/\partial v_3 \\ 0 & 0 & 0 &\partial r_{22}/\partial v_1 &\partial r_{22}/\partial v_2 &\partial r_{22}/\partial v_3 \\ 0 & 0 & 0 &\partial r_{23}/\partial v_1 &\partial r_{23}/\partial v_2 &\partial r_{23}/\partial v_3 \end{vmatrix} \\ &= 1.\end{aligned} \hspace{\stretch{1}}(1.0.6)

Our remaining integral can now be evaluated

\begin{aligned}\int d^3 \mathbf{r}_1 d^3 \mathbf{r}_2 e^{-\beta K\frac{ \left\lvert {\mathbf{r}_1 - \mathbf{r}_2} \right\rvert^2 }{2}}  &= \int d^3 \mathbf{u} d^3 \mathbf{v} e^{-\beta K \left\lvert {\mathbf{u}} \right\rvert^2 /2 } \\ &= V \int d^3 \mathbf{u} e^{-\beta K \left\lvert {\mathbf{u}} \right\rvert^2 /2 } \\ &= V \left( \frac{ 2 \pi }{ K \beta }  \right)^{3/2}.\end{aligned} \hspace{\stretch{1}}(1.0.7)

Our partition function is now completely evaluated

\begin{aligned}Z_1 = V\frac{1}{{h^6}} \left( \frac{2 \pi m}{\beta} \right)^{3}\left( \frac{ 2 \pi }{ K \beta }  \right)^{3/2}.\end{aligned} \hspace{\stretch{1}}(1.0.8)

As a function of V and T as in the text, we write

\begin{aligned}Z_1 = V f(T)\end{aligned} \hspace{\stretch{1}}(1.0.9a)

\begin{aligned}f(T) = \left( \frac{m }{h^2 } \sqrt{\frac{(2\pi)^3}{K}}  \right)^3\left( k_{\mathrm{B}} T \right)^{9/2}.\end{aligned} \hspace{\stretch{1}}(1.0.9b)

Gibbs sum

Our Gibbs sum, summing over the number of molecules (not atoms), is

\begin{aligned}Z_{\mathrm{G}} &= \sum_{N_r = 0}^\infty \frac{z^{N_r}}{N_r!} Z_1^{N_r} \\ &= e^{ z V f(T) },\end{aligned} \hspace{\stretch{1}}(1.0.10)

or

\begin{aligned}q &= \ln Z_{\mathrm{G}} \\ &= z V f(T) \\ &= P V \beta.\end{aligned} \hspace{\stretch{1}}(1.0.11)

The fact that we can sum this as an exponential series so nicely looks like it’s one of the main advantages to this grand partition function (Gibbs sum). We can avoid any of the large N! approximations that we have to use when the number of particles is explicitly fixed.

Pressure

The pressure follows

\begin{aligned}P &= z f(T) k_{\mathrm{B}} T \\ &= e^{\mu/k_{\mathrm{B}} T}\left( \frac{m }{h^2 } \sqrt{\frac{(2\pi)^3}{K}}  \right)^3\left( k_{\mathrm{B}} T \right)^{11/2}.\end{aligned} \hspace{\stretch{1}}(1.0.12)

Average energy

\begin{aligned}\left\langle{{H}}\right\rangle &= -\frac{\partial {q}}{\partial {\beta}} \\ &= - z V \frac{9}{2} \frac{f(T)}{T} \frac{\partial {T}}{\partial {\beta}} \\ &= z V \frac{9}{2} \frac{f(T)}{T} k_{\mathrm{B}} T^2 \\ &= \frac{9}{2} z V f(T) k_{\mathrm{B}} T,\end{aligned} \hspace{\stretch{1}}(1.0.13)

or

\begin{aligned}\left\langle{{H}}\right\rangle = \frac{9}{2} k_{\mathrm{B}} T \, e^{\mu/k_{\mathrm{B}} T} V \left( \frac{m }{h^2 } \sqrt{\frac{(2\pi)^3}{K}}  \right)^3\left( k_{\mathrm{B}} T \right)^{9/2}.\end{aligned} \hspace{\stretch{1}}(1.0.14)

Since z V f(T) is the average number of molecules (computed below), this is just \left\langle{{H}}\right\rangle = \frac{9}{2} \left\langle{{N}}\right\rangle k_{\mathrm{B}} T, exactly what equipartition predicts for nine quadratic degrees of freedom per molecule.

Average occupancy

\begin{aligned}\left\langle{{N}}\right\rangle &= z \frac{\partial {}}{\partial {z}} \ln Z_{\mathrm{G}} \\ &= z \frac{\partial {}}{\partial {z}} \left( z V f(T)  \right) \\ &= z V f(T)\end{aligned} \hspace{\stretch{1}}(1.0.15)

but this is just q, or

\begin{aligned}\left\langle{{N}}\right\rangle &= e^{\mu/k_{\mathrm{B}} T} V\left( \frac{m }{h^2 } \sqrt{\frac{(2\pi)^3}{K}}  \right)^3\left( k_{\mathrm{B}} T \right)^{9/2}.\end{aligned} \hspace{\stretch{1}}(1.0.16)

Free energy

\begin{aligned}F &= - k_{\mathrm{B}} T \ln \frac{ Z_{\mathrm{G}} }{z^N} \\ &= - k_{\mathrm{B}} T \left( q - N \ln z  \right) \\ &= N k_{\mathrm{B}} T \beta \mu - k_{\mathrm{B}} T q \\ &= z V f(T) \mu - k_{\mathrm{B}} T z V f(T) \\ &= z V f(T) \left( \mu - k_{\mathrm{B}} T  \right)\end{aligned} \hspace{\stretch{1}}(1.0.17)

\begin{aligned}F = e^{\mu/k_{\mathrm{B}} T} V \left( \mu - k_{\mathrm{B}} T  \right)\left( \frac{m }{h^2 } \sqrt{\frac{(2\pi)^3}{K}}  \right)^3\left( k_{\mathrm{B}} T \right)^{9/2}.\end{aligned} \hspace{\stretch{1}}(1.0.18)

Entropy

\begin{aligned}S &= \frac{U - F}{T} \\ &= \frac{V}{T} e^{\mu/k_{\mathrm{B}} T} \left( \frac{m }{h^2 } \sqrt{\frac{(2\pi)^3}{K}}  \right)^3\left( k_{\mathrm{B}} T \right)^{9/2}\left( \frac{11}{2} k_{\mathrm{B}} T - \mu  \right).\end{aligned} \hspace{\stretch{1}}(1.0.19)

Expectation of atomic separation

The momentum portions of the average will just cancel out, leaving just

\begin{aligned}\left\langle{r_{12}^2}\right\rangle &= \frac{\int d^3 \mathbf{r}_1 d^3 \mathbf{r}_2 \left( \mathbf{r}_1 - \mathbf{r}_2 \right)^2 e^{-\beta K \left( \mathbf{r}_1 - \mathbf{r}_2 \right)^2 /2 }}{\int d^3 \mathbf{r}_1 d^3 \mathbf{r}_2 e^{-\beta K \left( \mathbf{r}_1 - \mathbf{r}_2 \right)^2 /2 }} \\ &= \frac{ \int d^3 \mathbf{u} \mathbf{u}^2 e^{-\beta K \mathbf{u}^2 /2 }}{\int d^3 \mathbf{u} e^{-\beta K \mathbf{u}^2 /2 }} \\ &= \frac{\int da db dc \left( a^2 + b^2 + c^2 \right) e^{-\beta K \left( a^2 + b^2 + c^2 \right) /2}}{\int e^{-\beta K \left( a^2 + b^2 + c^2 \right)/2}} \\ &= 3 \frac{\int da a^2 e^{-\beta K a^2/2}\int db dc e^{-\beta K \left( b^2 + c^2 \right) /2}}{\int e^{-\beta K \left( a^2 + b^2 + c^2 \right)/2 }} \\ &= 3 \frac{\int da a^2 e^{-\beta K a^2/2}}{\int e^{-\beta K a^2/2}}\end{aligned} \hspace{\stretch{1}}(1.0.20)

Expanding the numerator by parts we have

\begin{aligned}\int da a^2 e^{-\beta K a^2/2} \\ &= \int a d\frac{ e^{-\beta K a^2/2}}{- 2 \beta K/2} \\ &= \frac{1}{\beta K}\int e^{-\beta K a^2/2}.\end{aligned} \hspace{\stretch{1}}(1.0.21)

This gives us

\begin{aligned}\boxed{\left\langle r_{12}^2 \right\rangle = \frac{3}{\beta K} = \frac{3 k_{\mathrm{B}} T}{K}.}\end{aligned} \hspace{\stretch{1}}(1.0.22)
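This equipartition flavoured result is also easy to verify with a quick Monte Carlo average over the Gaussian bond distribution (a sketch of my own, with an arbitrary assumed spring constant):

# Monte Carlo check of <r_12^2> = 3 k_B T / K, working in units where k_B T = 1.
import numpy as np

rng = np.random.default_rng(0)
K = 2.5                                       # spring constant in these units (assumed)
# u = r_1 - r_2 is Gaussian with variance 1/(beta K) = 1/K per component.
u = rng.normal(scale=1.0 / np.sqrt(K), size=(200000, 3))
print(np.mean(np.sum(u**2, axis=1)), 3.0 / K)   # should agree to a percent or so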

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.


open system variance of N

Posted by peeterjoot on March 16, 2013


Question: Variance of N in open system ([1] pr 3.14)

Show that for an open system

\begin{aligned}\text{var}(N) = \frac{1}{{\beta}} \left({\partial {\bar{N}}}/{\partial {\mu}}\right)_{{V, T}}.\end{aligned} \hspace{\stretch{1}}(1.0.1)

Answer

In terms of the grand partition function, we find the (scaled) average number of particles

\begin{aligned}\frac{\partial {}}{\partial {\mu}} \ln Z_{\mathrm{G}} &= \frac{\partial {}}{\partial {\mu}} \ln \sum_{r,s} e^{\beta \mu N_r - \beta E_s} \\ &= \frac{1}{{Z_{\mathrm{G}}}} \sum_{r,s} \beta N_r e^{\beta \mu N_r - \beta E_s} \\ &= \beta \bar{N}.\end{aligned} \hspace{\stretch{1}}(1.0.2)

Our second derivative provides us a scaled variance

\begin{aligned}\frac{\partial^2 {{}}}{\partial {{\mu}}^2} \ln Z_{\mathrm{G}} &= \frac{\partial {}}{\partial {\mu}} \left( \frac{1}{{Z_{\mathrm{G}}}} \sum_{r,s} \beta N_r e^{\beta \mu N_r - \beta E_s}  \right) \\ &= \frac{1}{{Z_{\mathrm{G}}}} \sum_{r,s} (\beta N_r)^2 e^{\beta \mu N_r - \beta E_s}-\frac{1}{{Z_{\mathrm{G}}^2}} \left( \sum_{r,s} \beta N_r e^{\beta \mu N_r - \beta E_s} \right)^2 \\ &= \beta^2 \left( \bar{N^2} - {\bar{N}}^2  \right)\end{aligned} \hspace{\stretch{1}}(1.0.3)

Together this gives us the desired result

\begin{aligned}\text{var}(N) &= \frac{1}{{\beta^2}}\frac{\partial {}}{\partial {\mu}} \left( \beta \bar{N}  \right) \\ &= \frac{1}{{\beta}}\frac{\partial {\bar{N}}}{\partial {\mu}}.\end{aligned} \hspace{\stretch{1}}(1.0.4)
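As a sanity check of this fluctuation relation (my own addition), here’s a tiny numeric example using a toy grand canonical system with a handful of assumed single particle levels, comparing the direct variance against (1/\beta) {\partial {\bar{N}}}/{\partial {\mu}} evaluated by finite differences:

# Toy check of var(N) = (1/beta) d<N>/d mu in the grand canonical ensemble.
# Three fermion-like levels with arbitrary assumed energies, occupation 0 or 1 each.
import numpy as np
from itertools import product

levels = np.array([0.0, 0.7, 1.3])
beta = 1.0

def n_stats(mu):
    Ns, ws = [], []
    for occ in product((0, 1), repeat=len(levels)):
        Ns.append(sum(occ))
        ws.append(np.exp(beta * (mu * sum(occ) - float(np.dot(occ, levels)))))
    Ns, ws = np.array(Ns), np.array(ws)
    Nbar = np.sum(Ns * ws) / ws.sum()
    return Nbar, np.sum(Ns**2 * ws) / ws.sum() - Nbar**2

mu, dmu = 0.5, 1e-5
dNbar_dmu = (n_stats(mu + dmu)[0] - n_stats(mu - dmu)[0]) / (2 * dmu)
print(n_stats(mu)[1], dNbar_dmu / beta)   # the two should agree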

References

[1] E.A. Jackson. Equilibrium statistical mechanics. Dover Pubns, 2000.


probability forms of entropy

Posted by peeterjoot on March 16, 2013


Question: Entropy as probability

[1] points out that entropy can be written as

\begin{aligned}S = - k_{\mathrm{B}} \sum_i P_i \ln P_i\end{aligned} \hspace{\stretch{1}}(1.0.1)

where

\begin{aligned}P_i = \frac{e^{-\beta E_i}}{Z}\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}Z = \sum_i e^{-\beta E_i}.\end{aligned} \hspace{\stretch{1}}(1.0.2b)

Show that this follows from the free energy F = U - T S = -k_{\mathrm{B}} T \ln Z.

Answer

In terms of the free and average energies, we have

\begin{aligned}\frac{S}{k_{\mathrm{B}}} &= \frac{U - F}{k_{\mathrm{B}} T} \\ &=   \beta \left( -\frac{\partial {\ln Z}}{\partial {\beta}} \right)   - \beta \left( -k_{\mathrm{B}} T \ln Z \right) \\ &= \frac{\sum_i \beta E_i e^{-\beta E_i}}{Z}  +\ln Z \\ &= -\sum_i P_i \ln e^{-\beta E_i} + \sum_i P_i \ln Z \\ &= -\sum_i P_i \ln \frac{e^{-\beta E_i}}{Z} P_i \\ &= -\sum_i P_i \ln P_i.\end{aligned} \hspace{\stretch{1}}(1.0.3)
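This identity is also easy to verify numerically for an arbitrary finite spectrum; here’s a small check of my own (with made up energy levels, and units where k_{\mathrm{B}} = 1):

# Check S = -sum_i P_i ln P_i against S = (U - F)/T for an assumed finite spectrum.
import numpy as np

E = np.array([0.0, 0.3, 1.1, 2.4])     # assumed energy levels
beta = 1.7

w = np.exp(-beta * E)
P = w / w.sum()
U = np.sum(P * E)
F = -np.log(w.sum()) / beta
print((U - F) * beta, -np.sum(P * np.log(P)))   # identical up to roundoff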

Question: Entropy in terms of grand partition probabilities ([2] pr 4.1)

Generalize the entropy result of the previous problem to the grand canonical scheme, where we have

\begin{aligned}P_{r, s} = \frac{e^{-\alpha N_r - \beta E_s}}{Z_{\mathrm{G}}}\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}Z_{\mathrm{G}} = \sum_{r,s} e^{-\alpha N_r - \beta E_s}\end{aligned} \hspace{\stretch{1}}(1.0.4b)

\begin{aligned}z = e^{-\alpha} = e^{\mu \beta}\end{aligned} \hspace{\stretch{1}}(1.0.4c)

\begin{aligned}q = \ln Z_{\mathrm{G}},\end{aligned} \hspace{\stretch{1}}(1.0.4d)

and show

\begin{aligned}S = - k_{\mathrm{B}} \sum_{r,s} P_{r,s} \ln P_{r,s}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

Answer

With

\begin{aligned}\beta P V = q,\end{aligned} \hspace{\stretch{1}}(1.0.6)

the free energy takes the form

\begin{aligned}F = N \mu - P V = N \mu - q/\beta,\end{aligned} \hspace{\stretch{1}}(1.0.7)

so that the entropy (scaled by k_{\mathrm{B}}) leads us to the desired result

\begin{aligned}\frac{S}{k_{\mathrm{B}}} &= \beta U - N \mu \beta + q/(\beta k_{\mathrm{B}} T) \\ &= -\beta \frac{\partial {q}}{\partial {\beta}} - z \mu \beta \frac{\partial {q}}{\partial {z}} + q \\ &= \frac{1}{{Z_{\mathrm{G}}}}\sum_{r, s}\left( -\beta (-E_s) - \mu \beta N_r  \right) e^{-\alpha N_r - \beta E_s}+ \ln Z_{\mathrm{G}} \\ &= \sum_{r, s} \ln e^{ \alpha N_r + \beta E_s } P_{r,s} + \left( \sum_{r, s} P_{r, s}  \right)\ln Z_{\mathrm{G}} \\ &= -\sum_{r, s} \ln \frac{e^{ -\alpha N_r - \beta E_s }}{Z_{\mathrm{G}}} P_{r,s} \\ &= -\sum_{r, s} P_{r, s} \ln P_{r, s}\end{aligned} \hspace{\stretch{1}}(1.0.8)

References

[1] E.A. Jackson. Equilibrium statistical mechanics. Dover Pubns, 2000.

[2] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.
