Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Posts Tagged ‘Central limit theorem’

Final version of my phy452.pdf notes posted

Posted by peeterjoot on September 5, 2013

I’d intended to rework the exam problems over the summer and make that the last update to my stat mech notes. However, I ended up studying world events and some other non-mainstream ideas intensively over the summer, and never got around to that final update.

Since I’m starting a new course (condensed matter) soon, I’ll end up having to focus on that, and have now posted a final version of my notes as is.

Since the last update, the following additions were made:

September 05, 2013 Large volume fermi gas density

May 30, 2013 Bernoulli polynomials and numbers and Euler-Maclaurin summation

May 09, 2013 Bose gas specific heat above condensation temperature

May 09, 2013 A dumb expansion of the Fermi-Dirac grand partition function

April 30, 2013 Ultra relativistic spin zero condensation temperature

April 30, 2013 Summary of statistical mechanics relations and helpful formulas

April 24, 2013 Low temperature Fermi gas chemical potential

Summary of statistical mechanics relations and helpful formulas (cheat sheet fodder)

Posted by peeterjoot on April 29, 2013

Central limit theorem

If \left\langle{{x}}\right\rangle = \mu and \sigma^2 = \left\langle{{x^2}}\right\rangle - \left\langle{{x}}\right\rangle^2 for each of N independent, identically distributed variables x, and X = \sum x, then in the large N limit

\begin{aligned}\lim_{N \rightarrow \infty} P(X)= \frac{1}{{\sigma \sqrt{2 \pi N}}} \exp\left( - \frac{ (X - N \mu)^2}{2 N \sigma^2} \right)\end{aligned} \hspace{\stretch{1}}(1.0.1a)

\begin{aligned}\left\langle{{X}}\right\rangle = N \mu\end{aligned} \hspace{\stretch{1}}(1.0.1b)

\begin{aligned}\left\langle{{X^2}}\right\rangle - \left\langle{{X}}\right\rangle^2 = N \sigma^2\end{aligned} \hspace{\stretch{1}}(1.0.1c)
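As a quick numerical sanity check of eq. 1.0.1b and eq. 1.0.1c (a minimal sketch; the exponential distribution and all parameter values here are arbitrary choices, not from the notes):

import numpy as np

rng = np.random.default_rng(42)
N, trials = 1000, 20000

# Exponential draws with scale 1, so mu = 1 and sigma^2 = 1 (arbitrary choice).
samples = rng.exponential(scale=1.0, size=(trials, N))
X = samples.sum(axis=1)           # X = sum of N iid draws, once per trial

print(X.mean(), N * 1.0)          # sample mean of X vs N mu
print(X.var(), N * 1.0)           # sample variance of X vs N sigma^2

Both comparisons agree to within sampling error.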

Binomial distribution

\begin{aligned}P_N(X) = \left\{\begin{array}{l l}\left(\frac{1}{{2}}\right)^N \frac{N!}{\left(\frac{N-X}{2}\right)!\left(\frac{N+X}{2}\right)!}& \quad \mbox{if X and N have same parity} \\ 0 & \quad \mbox{otherwise} \end{array},\right.\end{aligned} \hspace{\stretch{1}}(1.0.2)

where X is, for example, the number of heads minus the number of tails in N tosses of a fair coin.

Generating function

Given the Fourier transform of a probability distribution \tilde{P}(k) we have

\begin{aligned}{\left.{{ \frac{\partial^n}{\partial k^n}    \tilde{P}(k) }}\right\vert}_{{k = 0}}= (-i)^n \left\langle{{x^n}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.0.3)

Handy mathematics

\begin{aligned}\ln( 1 + x ) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots\end{aligned} \hspace{\stretch{1}}(1.0.4)

\begin{aligned}N! \approx \sqrt{ 2 \pi N} N^N e^{-N}\end{aligned} \hspace{\stretch{1}}(1.0.5)

\begin{aligned}\ln N! \approx \frac{1}{{2}} \ln 2 \pi -N + \left( N + \frac{1}{{2}}  \right)\ln N \approx N \ln N - N\end{aligned} \hspace{\stretch{1}}(1.0.6)
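A two line check of these approximations, using Python's lgamma for the exact \ln N! (the N values are arbitrary):

from math import lgamma, log, pi

for N in (10, 100, 1000):
    exact = lgamma(N + 1)                              # ln N! (exact)
    full = 0.5 * log(2 * pi) - N + (N + 0.5) * log(N)  # eq. 1.0.6, first form
    crude = N * log(N) - N                             # eq. 1.0.6, second form
    print(N, exact, full, crude)

Already at N = 10 the full form is good to about 0.008, while the crude form is off by about 2.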

\begin{aligned}\text{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2} dt\end{aligned} \hspace{\stretch{1}}(1.0.7)

\begin{aligned}\Gamma(\alpha) = \int_0^\infty dy e^{-y} y^{\alpha - 1}\end{aligned} \hspace{\stretch{1}}(1.0.8)

\begin{aligned}\Gamma(\alpha + 1) = \alpha \Gamma(\alpha)\end{aligned} \hspace{\stretch{1}}(1.0.9)

\begin{aligned}\Gamma\left( 1/2 \right) = \sqrt{\pi}\end{aligned} \hspace{\stretch{1}}(1.0.10)

\begin{aligned}\zeta(s) = \sum_{k=1}^{\infty} k^{-s}\end{aligned} \hspace{\stretch{1}}(1.0.11)

\begin{aligned}\begin{aligned}\zeta(3/2) &\approx 2.61238 \\ \zeta(2) &\approx 1.64493 \\ \zeta(5/2) &\approx 1.34149 \\ \zeta(3) &\approx 1.20206\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.12)

\begin{aligned}\Gamma(z) \Gamma(1-z) = \frac{\pi}{\sin(\pi z)}\end{aligned} \hspace{\stretch{1}}(1.0.13)

\begin{aligned}P(x, t) = \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, t) \exp\left( i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.14a)

\begin{aligned}\tilde{P}(k, t) = \int_{-\infty}^\infty dx P(x, t) \exp\left( -i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.14b)

Heaviside theta

\begin{aligned}\Theta(x) = \left\{\begin{array}{l l}1 & \quad x \ge 0 \\ 0 & \quad x < 0\end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.0.15a)

\begin{aligned}\frac{d\Theta}{dx} = \delta(x)\end{aligned} \hspace{\stretch{1}}(1.0.15b)

\begin{aligned}\sum_{m = -l}^l a^m=\frac{a^{l + 1/2} - a^{-(l+1/2)}}{a^{1/2} - a^{-1/2}}\end{aligned} \hspace{\stretch{1}}(1.0.16a)

\begin{aligned}\sum_{m = -l}^l e^{b m}=\frac{\sinh(b(l + 1/2))}{\sinh(b/2)}\end{aligned} \hspace{\stretch{1}}(1.0.16b)

\begin{aligned}\int_{-\infty}^\infty q^{2 N} e^{-a q^2} dq=\frac{(2 N - 1)!!}{(2a)^N} \sqrt{\frac{\pi}{a}}\end{aligned} \hspace{\stretch{1}}(1.0.17a)

\begin{aligned}\int_{-\infty}^\infty e^{-a q^2} dq=\sqrt{\frac{\pi}{a}}\end{aligned} \hspace{\stretch{1}}(1.0.17b)

\begin{aligned}\binom{-\left\lvert {m} \right\rvert}{k} = (-1)^k \frac{\left\lvert {m} \right\rvert}{\left\lvert {m} \right\rvert + k} \binom{\left\lvert {m} \right\rvert+k}{\left\lvert {m} \right\rvert}\end{aligned} \hspace{\stretch{1}}(1.0.18)

\begin{aligned}\int_0^\infty d\epsilon \frac{\epsilon^3}{e^{\beta \epsilon} - 1} =\frac{\pi ^4}{15 \beta ^4}.\end{aligned} \hspace{\stretch{1}}(1.0.19)
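This last integral is easily verified numerically (a minimal check; the value of \beta is an arbitrary choice):

import numpy as np
from scipy.integrate import quad

beta = 2.0
# expm1 avoids cancellation; the guard avoids 0/0 at the endpoint.
val, _ = quad(lambda e: e**3 / np.expm1(beta * e) if e > 0 else 0.0, 0, np.inf)
print(val, np.pi**4 / (15 * beta**4))   # both ~0.40587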

Volume of a sphere in m dimensions

\begin{aligned}V_m= \frac{ \pi^{m/2} R^{m} }{   \Gamma\left( m/2 + 1 \right)}\end{aligned} \hspace{\stretch{1}}(1.0.20)

area of ellipse

\begin{aligned}A = \pi a b\end{aligned} \hspace{\stretch{1}}(1.0.21)

Radius of gyration of a 3D polymer

With radius a, we have

\begin{aligned}r_N \approx a \sqrt{N}\end{aligned} \hspace{\stretch{1}}(1.0.22)

Velocity random walk

We find

\begin{aligned}\mathcal{P}_{N_{\mathrm{c}}}(\mathbf{v}) \propto e^{-\frac{(\mathbf{v} - \mathbf{v}_0)^2}{2 N_{\mathrm{c}}}}\end{aligned} \hspace{\stretch{1}}(1.0.23)

Random walk

1D Random walk

\begin{aligned}\mathcal{P}( x, t ) = \frac{1}{{2}} \mathcal{P}(x + \delta x, t - \delta t)+\frac{1}{{2}} \mathcal{P}(x - \delta x, t - \delta t)\end{aligned} \hspace{\stretch{1}}(1.0.24)

leads to

\begin{aligned}\frac{\partial {\mathcal{P}}}{\partial {t}}(x, t) =\frac{1}{{2}} \frac{(\delta x)^2}{\delta t}\frac{\partial^2 {{\mathcal{P}}}}{\partial {{x}}^2}(x, t) = D \frac{\partial^2 {{\mathcal{P}}}}{\partial {{x}}^2}(x, t) = -\frac{\partial {J}}{\partial {x}}.\end{aligned} \hspace{\stretch{1}}(1.0.25)

The probability current is related to the density gradient through the diffusion constant by Fick’s law

\begin{aligned}J = -D \frac{\partial {\mathcal{P}}}{\partial {x}},\end{aligned} \hspace{\stretch{1}}(1.0.26)

with which we can cast the probability diffusion identity into a continuity equation form

\begin{aligned}\frac{\partial {\mathcal{P}}}{\partial {t}} + \frac{\partial {J}}{\partial {x}} = 0 \end{aligned} \hspace{\stretch{1}}(1.0.27)

In 3D (with the Maxwell distribution frictional term), this takes the form

\begin{aligned}\mathbf{j} = -D \boldsymbol{\nabla}_\mathbf{v} c(\mathbf{v}, t) - \eta \mathbf{v} c(\mathbf{v}, t)\end{aligned} \hspace{\stretch{1}}(1.0.28a)

\begin{aligned}\frac{\partial {}}{\partial {t}} c(\mathbf{v}, t) + \boldsymbol{\nabla}_\mathbf{v} \cdot \mathbf{j}(\mathbf{v}, t) = 0\end{aligned} \hspace{\stretch{1}}(1.0.28b)

Maxwell distribution

Add a frictional term to the velocity space diffusion current

\begin{aligned}j_v = -D \frac{\partial {c}}{\partial {v}}(v, t) - \eta v c(v).\end{aligned} \hspace{\stretch{1}}(1.0.29)

For steady state the continuity equation 0 = \frac{dc}{dt} = -\frac{\partial {j_v}}{\partial {v}} leads to

\begin{aligned}c(v) \propto \exp\left(- \frac{\eta v^2}{2 D}\right).\end{aligned} \hspace{\stretch{1}}(1.0.30)

We also find

\begin{aligned}\left\langle{{v^2}}\right\rangle = \frac{D}{\eta},\end{aligned} \hspace{\stretch{1}}(1.0.31)

and identify

\begin{aligned}\frac{1}{{2}} m \left\langle{{\mathbf{v}^2}}\right\rangle = \frac{1}{{2}} m \left( \frac{D}{\eta} \right) = \frac{1}{{2}} k_{\mathrm{B}} T\end{aligned} \hspace{\stretch{1}}(1.0.32)

Hamilton’s equations

\begin{aligned}\frac{\partial {H}}{\partial {p}} = \dot{x}\end{aligned} \hspace{\stretch{1}}(1.0.33a)

\begin{aligned}\frac{\partial {H}}{\partial {x}} = -\dot{p}\end{aligned} \hspace{\stretch{1}}(1.0.33b)

SHO

\begin{aligned}H = \frac{p^2}{2m} + \frac{1}{{2}} k x^2\end{aligned} \hspace{\stretch{1}}(1.0.34a)

\begin{aligned}\omega^2 = \frac{k}{m}\end{aligned} \hspace{\stretch{1}}(1.0.34b)

Quantum energy eigenvalues

\begin{aligned}E_n = \left( n + \frac{1}{{2}}  \right) \hbar \omega\end{aligned} \hspace{\stretch{1}}(1.0.35)

Liouville’s theorem

\begin{aligned}\frac{d{{\rho}}}{dt} = \frac{\partial {\rho}}{\partial {t}} + \dot{x} \frac{\partial {\rho}}{\partial {x}} + \dot{p} \frac{\partial {\rho}}{\partial {p}}=  \cdots  = \frac{\partial {\rho}}{\partial {t}} + \frac{\partial {\left( \dot{x} \rho \right)}}{\partial {x}} + \frac{\partial {\left( \dot{p} \rho \right)}}{\partial {p}} = \frac{\partial {\rho}}{\partial {t}} + \boldsymbol{\nabla}_{x,p} \cdot (\rho \dot{x}, \rho \dot{p})= \frac{\partial {\rho}}{\partial {t}} + \boldsymbol{\nabla} \cdot \mathbf{J}= 0,\end{aligned} \hspace{\stretch{1}}(1.0.36)

Regardless of whether we have a steady state system, if we move along with a small phase space volume element, the probability density in that neighbourhood will be constant.

Ergodic

A system for which all accessible phase space is swept out by the trajectories. This and Liouville’s theorem allow us to treat any given small phase space volume as equally probable to its time evolved phase space region, and to switch from time averaging to ensemble averaging.

Thermodynamics

\begin{aligned}dE = T dS - P dV + \mu dN\end{aligned} \hspace{\stretch{1}}(1.0.37a)

\begin{aligned}\frac{1}{{T}} = \left({\partial {S}}/{\partial {E}}\right)_{{N,V}}\end{aligned} \hspace{\stretch{1}}(1.0.37b)

\begin{aligned}\frac{P}{T} = \left({\partial {S}}/{\partial {V}}\right)_{{N,E}}\end{aligned} \hspace{\stretch{1}}(1.0.37c)

\begin{aligned}-\frac{\mu}{T} = \left({\partial {S}}/{\partial {N}}\right)_{{V,E}}\end{aligned} \hspace{\stretch{1}}(1.0.37d)

\begin{aligned}P = - \left({\partial {E}}/{\partial {V}}\right)_{{N,S}}= - \left({\partial {F}}/{\partial {V}}\right)_{{N,T}}\end{aligned} \hspace{\stretch{1}}(1.0.37e)

\begin{aligned}\mu = \left({\partial {E}}/{\partial {N}}\right)_{{V,S}} = \left({\partial {F}}/{\partial {N}}\right)_{{V,T}}\end{aligned} \hspace{\stretch{1}}(1.0.37f)

\begin{aligned}T = \left({\partial {E}}/{\partial {S}}\right)_{{N,V}}\end{aligned} \hspace{\stretch{1}}(1.0.37g)

\begin{aligned}F = E - TS\end{aligned} \hspace{\stretch{1}}(1.0.37h)

\begin{aligned}G = F + P V = E - T S + P V = \mu N\end{aligned} \hspace{\stretch{1}}(1.0.37i)

\begin{aligned}H = E + P V = G + T S\end{aligned} \hspace{\stretch{1}}(1.0.37j)

\begin{aligned}C_{\mathrm{V}} = T \left({\partial {S}}/{\partial {T}}\right)_{{N,V}} = \left({\partial {E}}/{\partial {T}}\right)_{{N,V}} = - T \left( \frac{\partial^2 {{F}}}{\partial {{T}}^2}  \right)_{N,V}\end{aligned} \hspace{\stretch{1}}(1.0.37k)

\begin{aligned}C_{\mathrm{P}} = T \left({\partial {S}}/{\partial {T}}\right)_{{N,P}} = \left({\partial {H}}/{\partial {T}}\right)_{{N,P}}\end{aligned} \hspace{\stretch{1}}(1.0.37l)

\begin{aligned}\underbrace{dE}_{\text{Change in energy}}=\underbrace{d W}_{\text{work done on the system}}+\underbrace{d Q}_{\text{Heat supplied to the system}}\end{aligned} \hspace{\stretch{1}}(1.0.38)

Example (work on gas): d W = -P dV. Adiabatic: d Q = 0. Cyclic: dE = 0.

Microstates

\begin{aligned}\beta = \frac{1}{k_{\mathrm{B}} T}\end{aligned} \hspace{\stretch{1}}(1.0.39)

\begin{aligned}S = k_{\mathrm{B}} \ln \Omega \end{aligned} \hspace{\stretch{1}}(1.0.40)

\begin{aligned}\Omega(N, V, E) = \frac{1}{h^{3N} N!} \int_V d\mathbf{x}_1  \cdots  d\mathbf{x}_N \int d\mathbf{p}_1  \cdots  d\mathbf{p}_N \delta \left(E - \frac{\mathbf{p}_1^2}{2 m} \cdots - \frac{\mathbf{p}_N^2}{2 m}\right)=\frac{V^N}{h^{3N} N!}\int d\mathbf{p}_1  \cdots d\mathbf{p}_N \delta \left(E - \frac{\mathbf{p}_1^2}{2m} \cdots - \frac{\mathbf{p}_N^2}{2m}\right)\end{aligned} \hspace{\stretch{1}}(1.0.41)

\begin{aligned}\Omega = \frac{d\gamma}{dE}\end{aligned} \hspace{\stretch{1}}(1.0.42)

\begin{aligned}\gamma=\frac{V^N}{h^{3N} N!}\int d\mathbf{p}_1  \cdots d\mathbf{p}_N \Theta \left(E - \frac{\mathbf{p}_1^2}{2m} \cdots - \frac{\mathbf{p}_N^2}{2m}\right)\end{aligned} \hspace{\stretch{1}}(1.0.43)

quantum

\begin{aligned}\gamma = \sum_i \Theta(E - \epsilon_i)\end{aligned} \hspace{\stretch{1}}(1.0.44)

Ideal gas

\begin{aligned}\Omega = \frac{V^N}{N!} \frac{1}{{h^{3N}}} \frac{( 2 \pi m E)^{3 N/2 }}{E} \frac{1}{\Gamma( 3N/2 ) }\end{aligned} \hspace{\stretch{1}}(1.0.45)

\begin{aligned}S_{\mathrm{ideal}} = k_{\mathrm{B}} \left(N \ln \frac{V}{N} + \frac{3 N}{2} \ln \left( \frac{4 \pi m E }{3 N h^2}  \right) + \frac{5 N}{2} \right)\end{aligned} \hspace{\stretch{1}}(1.0.46)

Quantum free particle in a box

\begin{aligned}\Psi_{n_1, n_2, n_3}(x, y, z) = \left( \frac{2}{L} \right)^{3/2} \sin\left( \frac{ n_1 \pi x}{L}  \right)\sin\left( \frac{ n_2 \pi y}{L}  \right)\sin\left( \frac{ n_3 \pi z}{L}  \right)\end{aligned} \hspace{\stretch{1}}(1.0.47a)

\begin{aligned}\epsilon_{n_1, n_2, n_3} = \frac{h^2}{8 m L^2} \left( n_1^2 + n_2^2 + n_3^2  \right)\end{aligned} \hspace{\stretch{1}}(1.0.47b)

\begin{aligned}\epsilon_k = \frac{\hbar^2 k^2}{2m},\end{aligned} \hspace{\stretch{1}}(1.0.47c)

Spin

magnetization

\begin{aligned}\mu = -\frac{\partial {F}}{\partial {B}}\end{aligned} \hspace{\stretch{1}}(1.0.48)

moment per particle

\begin{aligned}m = \mu/N\end{aligned} \hspace{\stretch{1}}(1.0.49)

spin matrices

\begin{aligned}\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.50a)

\begin{aligned}\sigma_y = \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.50b)

\begin{aligned}\sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.50c)

l \ge 0, -l \le m \le l

\begin{aligned}\mathbf{L}^2 {\left\lvert {lm} \right\rangle} = l(l+1)\hbar^2 {\left\lvert {lm} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.51a)

\begin{aligned}L_z {\left\lvert {l m} \right\rangle} = \hbar m {\left\lvert {l m} \right\rangle}\end{aligned} \hspace{\stretch{1}}(1.0.51b)

spin addition

\begin{aligned}S(S + 1) \hbar^2\end{aligned} \hspace{\stretch{1}}(1.0.52)

Canonical ensemble

classical

\begin{aligned}\Omega(N, E) = \frac{ V }{ h^3 N} \int d\mathbf{p}_1 e^{\frac{S}{k_{\mathrm{B}}}(N, E)}e^{-\frac{1}{{k_{\mathrm{B}}}} \left( \frac{\partial {S}}{\partial {N}} \right)_{E, V} }e^{-\frac{\mathbf{p}_1^2}{2m k_{\mathrm{B}}}\left( \frac{\partial {S}}{\partial {E}} \right)_{N, V}}\end{aligned} \hspace{\stretch{1}}(1.0.53)

quantum

\begin{aligned}\Omega(E) \approx\sum_{m \in \text{subsystem}} e^{\frac{1}{{k_{\mathrm{B}}}} S(E)}e^{-\beta \mathcal{E}_m}\end{aligned} \hspace{\stretch{1}}(1.0.54a)

\begin{aligned}Z = \sum_m e^{-\beta \mathcal{E}_m} = \text{Tr} \left( e^{-\beta \hat{H}_{\text{subsystem}}}  \right)\end{aligned} \hspace{\stretch{1}}(1.0.54b)

\begin{aligned}\left\langle{{E}}\right\rangle = \frac{\int He^{- \beta H }}{\int e^{- \beta H }}\end{aligned} \hspace{\stretch{1}}(1.0.55a)

\begin{aligned}\left\langle{{E^2}}\right\rangle = \frac{\int H^2e^{- \beta H }}{\int e^{- \beta H }}\end{aligned} \hspace{\stretch{1}}(1.0.55b)

\begin{aligned}Z \equiv \frac{1}{{h^{3N} N!}}\int e^{- \beta H }\end{aligned} \hspace{\stretch{1}}(1.0.55c)

\begin{aligned}\left\langle{{E}}\right\rangle = -\frac{1}{{Z}} \frac{\partial {Z}}{\partial {\beta}} = - \frac{\partial {\ln Z}}{\partial {\beta}} =\frac{\partial {(\beta F)}}{\partial {\beta}}\end{aligned} \hspace{\stretch{1}}(1.0.55d)

\begin{aligned}\sigma_{\mathrm{E}}^2= \left\langle{{E^2}}\right\rangle - \left\langle{{E}}\right\rangle^2 =\frac{\partial^2 {{\ln Z}}}{\partial {{\beta}}^2} = k_{\mathrm{B}} T^2 \frac{\partial {\left\langle{{E}}\right\rangle}}{\partial {T}}= k_{\mathrm{B}} T^2 C_{\mathrm{V}} \propto N\end{aligned} \hspace{\stretch{1}}(1.0.55e)

\begin{aligned}Z = e^{-\beta (\left\langle{{E}}\right\rangle - T S) } = e^{-\beta F}\end{aligned} \hspace{\stretch{1}}(1.0.55f)

\begin{aligned}F = \left\langle{{E}}\right\rangle - T S = -k_{\mathrm{B}} T \ln Z\end{aligned} \hspace{\stretch{1}}(1.0.55g)

Grand Canonical ensemble

\begin{aligned}S = - k_{\mathrm{B}} \sum_{r,s} P_{r,s} \ln P_{r,s}\end{aligned} \hspace{\stretch{1}}(1.0.56)

\begin{aligned}P_{r, s} = \frac{e^{-\alpha N_r - \beta E_s}}{Z_{\mathrm{G}}}\end{aligned} \hspace{\stretch{1}}(1.0.57a)

\begin{aligned}Z_{\mathrm{G}} = \sum_{r,s} e^{-\alpha N_r - \beta E_s} = \sum_{r,s} z^{N_r} e^{-\beta E_s} = \sum_{N_r} z^{N_r} Z_{N_r}\end{aligned} \hspace{\stretch{1}}(1.0.57b)

\begin{aligned}z = e^{-\alpha} = e^{\mu \beta}\end{aligned} \hspace{\stretch{1}}(1.0.57c)

\begin{aligned}q = \ln Z_{\mathrm{G}} = P V \beta\end{aligned} \hspace{\stretch{1}}(1.0.57d)

\begin{aligned}\left\langle{{H}}\right\rangle = -\left({\partial {q}}/{\partial {\beta}}\right)_{{z,V}} = k_{\mathrm{B}} T^2 \left({\partial {q}}/{\partial {T}}\right)_{{z,V}} = \sum_\epsilon \frac{\epsilon}{z^{-1} e^{\beta \epsilon} \pm 1}\end{aligned} \hspace{\stretch{1}}(1.0.57e)

\begin{aligned}\left\langle{{N}}\right\rangle = z \left({\partial {q}}/{\partial {z}}\right)_{{V,T}} = \sum_\epsilon \frac{1}{{z^{-1} e^{\beta\epsilon} \pm 1}}\end{aligned} \hspace{\stretch{1}}(1.0.57f)

\begin{aligned}F = - k_{\mathrm{B}} T \ln \frac{ Z_{\mathrm{G}} }{z^N}\end{aligned} \hspace{\stretch{1}}(1.0.57g)

\begin{aligned}\left\langle{{n_\epsilon}}\right\rangle = -\frac{1}{{\beta}} \left({\partial {q}}/{\partial {\epsilon}}\right)_{{z, T, \text{other} \epsilon}} = \frac{1}{{z^{-1} e^{\beta \epsilon} \pm 1}}\end{aligned} \hspace{\stretch{1}}(1.0.57h)

\begin{aligned}\text{var}(N) = \frac{1}{{\beta}} \left({\partial {\left\langle{{N}}\right\rangle}}/{\partial {\mu}}\right)_{{V, T}} = - \frac{1}{{\beta}} \sum_\epsilon \left({\partial {\left\langle{{n_\epsilon}}\right\rangle}}/{\partial {\epsilon}}\right)_{{z,T}} = \sum_\epsilon \frac{z^{-1} e^{\beta \epsilon}}{\left( z^{-1} e^{\beta \epsilon} \pm 1 \right)^2}\end{aligned} \hspace{\stretch{1}}(1.0.57i)

\begin{aligned}\mathcal{P} \propto e^{\frac{\mu}{k_{\mathrm{B}} T} N_S}e^{-\frac{E_S}{k_{\mathrm{B}} T} }\end{aligned} \hspace{\stretch{1}}(1.0.59a)

\begin{aligned}Z_{\mathrm{G}}= \sum_{N=0}^\infty e^{\beta \mu N}\sum_{n_k, \sum n_m = N} e^{-\beta \sum_m n_m \epsilon_m}=\prod_{k} \left( \sum_{n_k} e^{-\beta(\epsilon_k - \mu) n_k} \right)\end{aligned} \hspace{\stretch{1}}(1.0.59b)

\begin{aligned}Z_{\mathrm{G}}^{\mathrm{QM}} = {\text{Tr}}_{\{\text{energy}, N\}} \left( e^{ -\beta (\hat{H} - \mu \hat{N} ) }  \right)\end{aligned} \hspace{\stretch{1}}(1.0.59c)

\begin{aligned}P V = \frac{2}{3} U\end{aligned} \hspace{\stretch{1}}(1.0.60a)

\begin{aligned}f_\nu^\pm(z) = \frac{1}{{\Gamma(\nu)}} \int_0^\infty dx \frac{x^{\nu - 1}}{z^{-1} e^x \pm 1}\end{aligned} \hspace{\stretch{1}}(1.0.60b)

\begin{aligned}f_\nu^\pm(z \approx 0) =z\mp\frac{z^{2}}{2^\nu}+\frac{z^{3}}{3^\nu}\mp\frac{z^{4}}{4^\nu}+  \cdots \end{aligned} \hspace{\stretch{1}}(1.0.60c)

\begin{aligned}z \frac{d f_\nu^{\pm}(z) }{dz} = f_{\nu-1}^{\pm}(z)\end{aligned} \hspace{\stretch{1}}(1.0.61)

\begin{aligned}\frac{d f_{3/2}^{\pm}(z) }{dT} = -\frac{3}{2T} f_{3/2}^{\pm}(z)\end{aligned} \hspace{\stretch{1}}(1.0.62)

(at fixed N/V, where f_{3/2}^{\pm}(z) \propto \lambda^3 \propto T^{-3/2}).

Fermions

\begin{aligned}\sum_{n_k = 0}^1 e^{-\beta(\epsilon_k - \mu) n_k}=1 + e^{-\beta(\epsilon_k - \mu)}\end{aligned} \hspace{\stretch{1}}(1.0.63)

\begin{aligned}N = (2 S + 1) V \int_0^{k_{\mathrm{F}}} \frac{4 \pi k^2 dk}{(2 \pi)^3}\end{aligned} \hspace{\stretch{1}}(1.0.64)

\begin{aligned}k_{\mathrm{F}} = \left( \frac{ 6 \pi^2 \rho }{2 S + 1} \right)^{1/3}\end{aligned} \hspace{\stretch{1}}(1.0.65a)

\begin{aligned}\epsilon_{\mathrm{F}} = \frac{\hbar^2}{2m} \left( \frac{6 \pi^2 \rho}{2 S + 1} \right)^{2/3}\end{aligned} \hspace{\stretch{1}}(1.0.65b)

\begin{aligned}\mu = \epsilon_{\mathrm{F}} - \frac{\pi^2}{12} \frac{(k_{\mathrm{B}} T)^2}{\epsilon_{\mathrm{F}}} +  \cdots \end{aligned} \hspace{\stretch{1}}(1.0.66)

\begin{aligned}\lambda \equiv \frac{h}{\sqrt{2 \pi m k_{\mathrm{B}} T}}\end{aligned} \hspace{\stretch{1}}(1.0.67)

\begin{aligned}\frac{N}{V}=\frac{g}{\lambda^3} f_{3/2}(z)=\frac{g}{\lambda^3} \left( e^{\beta \mu} - \frac{e^{2 \beta \mu}}{2^{3/2}} +  \cdots   \right) \end{aligned} \hspace{\stretch{1}}(1.0.68)

(so n = \frac{g}{\lambda^3} e^{\beta \mu} for large temperatures)

\begin{aligned}P \beta = \frac{g}{\lambda^3} f_{5/2}(z)\end{aligned} \hspace{\stretch{1}}(1.0.69a)

\begin{aligned}U= \frac{3}{2} N k_{\mathrm{B}} T \frac{f_{5/2}(z)}{f_{3/2}(z) }.\end{aligned} \hspace{\stretch{1}}(1.0.69b)

\begin{aligned}f_\nu^+(e^y) \approx\frac{y^\nu}{\Gamma(\nu + 1)}\left( 1 + 2 \nu \sum_{j = 1, 3, 5,  \cdots } (\nu-1)  \cdots (\nu - j) \left( 1 - 2^{-j} \right) \frac{\zeta(j+1)}{ y^{j + 1} }  \right)\end{aligned} \hspace{\stretch{1}}(1.0.70)

\begin{aligned}\frac{C}{N} = \frac{\pi^2}{2} k_{\mathrm{B}} \frac{ k_{\mathrm{B}} T}{\epsilon_{\mathrm{F}}}\end{aligned} \hspace{\stretch{1}}(1.0.71a)

\begin{aligned}A = N k_{\mathrm{B}} T \left( \ln z - \frac{f_{5/2}(z)}{f_{3/2}(z)}  \right)\end{aligned} \hspace{\stretch{1}}(1.0.71b)

Bosons

\begin{aligned}Z_{\mathrm{G}} = \prod_\epsilon \frac{1}{{ 1 - z e^{-\beta \epsilon} }}\end{aligned} \hspace{\stretch{1}}(1.0.72)

\begin{aligned}P \beta = \frac{1}{{\lambda^3}} g_{5/2}(z)\end{aligned} \hspace{\stretch{1}}(1.0.73)

\begin{aligned}U = \frac{3}{2} k_{\mathrm{B}} T \frac{V}{\lambda^3} g_{5/2}(z)\end{aligned} \hspace{\stretch{1}}(1.0.74)

\begin{aligned}N_e = N - N_0 = N \left( \frac{T}{T_c}  \right)^{3/2}\end{aligned} \hspace{\stretch{1}}(1.0.75)

For T < T_c, z = 1.

\begin{aligned}g_\nu(1) = \zeta(\nu).\end{aligned} \hspace{\stretch{1}}(1.0.76)

\begin{aligned}\sum_{n_k = 0}^\infty e^{-\beta(\epsilon_k - \mu) n_k} =\frac{1}{{1 - e^{-\beta(\epsilon_k - \mu)}}}\end{aligned} \hspace{\stretch{1}}(1.0.77)

\begin{aligned}f_\nu^-( e^{-\alpha} ) = \frac{ \Gamma(1 - \nu)}{ \alpha^{1 - \nu} } +  \cdots \end{aligned} \hspace{\stretch{1}}(1.0.78)

\begin{aligned}\rho \lambda^3 = g_{3/2}(z) \le \zeta(3/2) \approx 2.612\end{aligned} \hspace{\stretch{1}}(1.0.79a)

\begin{aligned}k_{\mathrm{B}} T_{\mathrm{c}} = \left( \frac{\rho}{\zeta(3/2)}  \right)^{2/3} \frac{ 2 \pi \hbar^2}{m}\end{aligned} \hspace{\stretch{1}}(1.0.79b)
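As a plausibility check of this last result, plugging in liquid helium-4 parameters (the atomic mass and the ~145 kg/m^3 mass density below are assumed textbook values, not from the notes) lands close to the measured lambda point of 2.17 K:

from scipy.constants import hbar, k, pi
from scipy.special import zeta

m = 6.646e-27      # mass of a He-4 atom, kg (assumed value)
n = 145.0 / m      # number density rho from the assumed mass density, 1/m^3

Tc = (2 * pi * hbar**2 / (m * k)) * (n / zeta(1.5))**(2.0 / 3.0)
print(Tc)          # ~3.1 K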

BEC

\begin{aligned}\rho= \rho_{\mathbf{k} = 0}+ \frac{1}{{\lambda^3}} g_{3/2}(z)\end{aligned} \hspace{\stretch{1}}(1.0.80a)

\begin{aligned}\rho_0 = \rho \left(1 - \left( \frac{T}{T_{\mathrm{c}}}  \right)^{3/2}\right)\end{aligned} \hspace{\stretch{1}}(1.0.80b)

\begin{aligned}\frac{E}{V} \propto \left( k_{\mathrm{B}} T \right)^{5/2}\end{aligned} \hspace{\stretch{1}}(1.0.81a)

\begin{aligned}\frac{C}{V} \propto \left( k_{\mathrm{B}} T \right)^{3/2}\end{aligned} \hspace{\stretch{1}}(1.0.81b)

\begin{aligned}\frac{S}{N k_{\mathrm{B}}} = \frac{5}{2} \frac{g_{5/2}}{g_{3/2}} - \ln z \Theta(T - T_c)\end{aligned} \hspace{\stretch{1}}(1.0.81c)

Density of states

Low velocities

\begin{aligned}N_1(\epsilon)=\frac{V}{\pi \hbar} \sqrt{ \frac{m}{2 \epsilon} }\end{aligned} \hspace{\stretch{1}}(1.0.82a)

\begin{aligned}N_2(\epsilon)=V \frac{m}{2 \pi \hbar^2}\end{aligned} \hspace{\stretch{1}}(1.0.82b)

\begin{aligned}N_3(\epsilon)=V \left( \frac{2 m}{\hbar^2} \right)^{3/2} \frac{1}{{4 \pi^2}} \sqrt{\epsilon}\end{aligned} \hspace{\stretch{1}}(1.0.82c)

relativistic

\begin{aligned}\mathcal{D}_1(\epsilon)=\frac{2 L}{ c h } \frac{ \sqrt{ \epsilon^2 - \left( m c^2  \right)^2} }{\epsilon}\end{aligned} \hspace{\stretch{1}}(1.0.83a)

\begin{aligned}\mathcal{D}_2(\epsilon)=\frac{2 \pi A}{ (c h)^2 } \frac{ \epsilon^2 - \left( m c^2  \right)^2 }{ \epsilon }\end{aligned} \hspace{\stretch{1}}(1.0.83b)

\begin{aligned}\mathcal{D}_3(\epsilon)=\frac{4 \pi V}{ (c h)^3 } \frac{\left(	\epsilon^2 - \left( m c^2  \right)^2 \right)^{3/2}}{\epsilon}\end{aligned} \hspace{\stretch{1}}(1.0.83c)

An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 27, 2013

Here’s my second update of my notes compilation for this course, including all of the following:

March 27, 2013 Fermi gas

March 26, 2013 Fermi gas thermodynamics

March 23, 2013 Relativistic generalization of statistical mechanics

March 21, 2013 Kittel Zipper problem

March 18, 2013 Pathria chapter 4 diatomic molecule problem

March 17, 2013 Gibbs sum for a two level system

March 16, 2013 open system variance of N

March 16, 2013 probability forms of entropy

March 14, 2013 Grand Canonical/Fermion-Bosons

March 13, 2013 Quantum anharmonic oscillator

March 12, 2013 Grand canonical ensemble

March 11, 2013 Heat capacity of perturbed harmonic oscillator

March 10, 2013 Langevin small approximation

March 10, 2013 Addition of two one half spins

March 10, 2013 Midterm II reflection

March 07, 2013 Thermodynamic identities

March 06, 2013 Temperature

March 05, 2013 Interacting spin

plus everything detailed in the description of my first update and before.

PHY452H1S Basic Statistical Mechanics. Problem Set 5: Temperature

Posted by peeterjoot on March 10, 2013

Disclaimer

This is an ungraded set of answers to the problems posed.

Question: Polymer stretching – “entropic forces” (2013 problem set 5, p1)

Consider a toy model of a polymer in one dimension which is made of N steps (amino acids) of unit length, going left or right like a random walk. Let one end of this polymer be at the origin and the other end be at a point X = \sqrt{N} (viz. the rms size of the polymer), so 1 \ll X \ll N. We have previously calculated the number of configurations corresponding to this condition (approximate the binomial distribution by a Gaussian).

Part a

Using this, find the entropy of this polymer as S = k_{\mathrm{B}} \ln \Omega. The free energy of this polymer, even in the absence of any other interactions, thus has an entropic contribution, F = -T S. If we stretch this polymer, we expect to have fewer available configurations, and thus a smaller entropy and a higher free energy.

Part b

Find the change in free energy of this polymer if we stretch this polymer from its end being at X to a larger distance X + \Delta X.

Part c

Show that the change in free energy is linear in the displacement for small \Delta X, and hence find the temperature dependent “entropic spring constant” of this polymer. (This entropic force is important to overcome for packing DNA into the nucleus, and in many biological processes.)

Typo correction (via email):
You need to show that the change in free energy is quadratic in the displacement \Delta X, not linear in \Delta X. The force is linear in \Delta X. (Exactly as for a “spring”.)

Answer

Entropy.

In lecture 2, probabilities for the sums of fair coin tosses were considered. Assigning \pm 1 to the events Y_k for heads and tails respectively, a random variable Y = \sum_k Y_k for the total of N such events was found to have the form

\begin{aligned}P_N(Y) = \left\{\begin{array}{l l}\left(\frac{1}{{2}}\right)^N \frac{N!}{\left(\frac{N-Y}{2}\right)!\left(\frac{N+Y}{2}\right)!}& \quad \mbox{if Y and N have same parity} \\ 0& \quad \mbox{otherwise} \end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.1.1)

For an individual coin toss we have the averages \left\langle{{Y_1}}\right\rangle = 0, and \left\langle{{Y_1^2}}\right\rangle = 1, so the central limit theorem provides us with a large N Gaussian approximation for this distribution

\begin{aligned}P_N(Y) \approx\frac{2}{\sqrt{2 \pi N}} \exp\left( -\frac{Y^2}{2N} \right).\end{aligned} \hspace{\stretch{1}}(1.1.2)

This fair coin toss problem can also be thought of as describing the coordinate of the end point of a one dimensional polymer, with the beginning point of the polymer fixed at the origin. Writing \Omega(N, Y) for the total number of configurations that have an end point at coordinate Y we have

\begin{aligned}P_N(Y) = \frac{\Omega(N, Y)}{2^N}.\end{aligned} \hspace{\stretch{1}}(1.1.3)

From this, the total number of configurations that have, say, length X = \left\lvert {Y} \right\rvert, in the large N Gaussian approximation, is

\begin{aligned}\Omega(N, X) &= 2^N \left( P_N(+X) +P_N(-X) \right) \\ &= \frac{2^{N + 2}}{\sqrt{2 \pi N}} \exp\left( -\frac{X^2}{2N} \right).\end{aligned} \hspace{\stretch{1}}(1.1.4)

The entropy associated with a one dimensional polymer of length X is therefore

\begin{aligned}S_N(X) &= - k_{\mathrm{B}} \frac{X^2}{2N} + k_{\mathrm{B}} \ln \frac{2^{N + 2}}{\sqrt{2 \pi N}} \\ &= - k_{\mathrm{B}} \frac{X^2}{2N} + \text{constant}.\end{aligned} \hspace{\stretch{1}}(1.1.5)

Writing S_0 for this constant the free energy is

\begin{aligned}\boxed{F = U - T S = U + k_{\mathrm{B}} T \frac{X^2}{2N} - S_0 T.}\end{aligned} \hspace{\stretch{1}}(1.1.6)

Change in free energy.

At constant temperature, stretching the polymer from its end being at X to a larger distance X + \Delta X, results in a free energy change of

\begin{aligned}\Delta F &= F( X + \Delta X ) - F(X) \\ &= \frac{k_{\mathrm{B}} T}{2N} \left( (X + \Delta X)^2 - X^2 \right) \\ &= \frac{k_{\mathrm{B}} T}{2N} \left( 2 X \Delta X + (\Delta X)^2 \right)\end{aligned} \hspace{\stretch{1}}(1.1.7)

If \Delta X is assumed small, our constant temperature change in free energy \Delta F \approx (\partial F/\partial X)_T \Delta X is

\begin{aligned}\boxed{\Delta F = \frac{k_{\mathrm{B}} T}{N} X \Delta X.}\end{aligned} \hspace{\stretch{1}}(1.1.8)

Temperature dependent spring constant.

I found the statement and subsequent correction of the problem statement somewhat confusing. To figure this all out, I thought it was reasonable to step back and relate free energy to the entropic force explicitly.

Consider temporarily a general thermodynamic system, for which we have by definition free energy and thermodynamic identity respectively

\begin{aligned}F = U - T S,\end{aligned} \hspace{\stretch{1}}(1.0.9a)

\begin{aligned}dU = T dS - P dV.\end{aligned} \hspace{\stretch{1}}(1.0.9b)

The differential of the free energy is

\begin{aligned}dF &= dU - T dS - S dT \\ &= -P dV - S dT \\ &= \left( \frac{\partial {F}}{\partial {T}} \right)_V dT+\left( \frac{\partial {F}}{\partial {V}} \right)_T dV.\end{aligned} \hspace{\stretch{1}}(1.0.10)

Forming the wedge product with dT, we arrive at the two-form

\begin{aligned}0 &= \left( \left( P + \left( \frac{\partial {F}}{\partial {V}} \right)_T \right) dV + \left( S + \left( \frac{\partial {F}}{\partial {T}} \right)_V \right) dT \right)\wedge dT \\ &= \left( P + \left( \frac{\partial {F}}{\partial {V}} \right)_T \right) dV \wedge dT.\end{aligned} \hspace{\stretch{1}}(1.0.11)

This provides the relation between free energy and the “pressure” for the system

\begin{aligned}P = - \left( \frac{\partial {F}}{\partial {V}} \right)_T.\end{aligned} \hspace{\stretch{1}}(1.0.12)

For a system with a constant cross section \Delta A, dV = \Delta A dX, so the force associated with the system is

\begin{aligned}f &= P \Delta A \\ &= - \frac{1}{{\Delta A}} \left( \frac{\partial {F}}{\partial {X}} \right)_T \Delta A,\end{aligned} \hspace{\stretch{1}}(1.0.13)

or

\begin{aligned}f = - \left( \frac{\partial {F}}{\partial {X}} \right)_T.\end{aligned} \hspace{\stretch{1}}(1.0.14)

Okay, now we have a relation between the force and the rate of change of the free energy

\begin{aligned}f(X) = -\frac{k_{\mathrm{B}} T}{N} X.\end{aligned} \hspace{\stretch{1}}(1.0.15)

Our temperature dependent “entropic spring constant” in analogy with f = -k X, is therefore

\begin{aligned}\boxed{k = \frac{k_{\mathrm{B}} T}{N}.}\end{aligned} \hspace{\stretch{1}}(1.0.16)
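As a numerical check of the parabolic entropy eq. 1.1.5 that underlies this spring constant (a minimal sketch; N and the X values are arbitrary even choices, in units where k_{\mathrm{B}} = 1):

from math import comb, log

N = 1000  # number of steps

def ln_omega(X):
    # Exact count of N-step walks ending at either +X or -X (X, N same parity).
    return log(2 * comb(N, (N + X) // 2))

for X in (10, 20, 40):
    print(X, ln_omega(X) - ln_omega(0), -X**2 / (2 * N))

The exact entropy differences S(X) - S(0) track the -X^2/2N parabola closely for X \ll N, which is all that the entropic force derivation relies on.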

Question: Independent one-dimensional harmonic oscillators (2013 problem set 5, p2)

Consider a set of N independent classical harmonic oscillators, each having a frequency \omega.

Part a

Find the canonical partition function at a temperature T for this system of oscillators, keeping track of the correction factors of the Planck constant. (Note that the oscillators are distinguishable, and we do not need a 1/N! correction factor.)

Part b

Using this, derive the mean energy and the specific heat at temperature T.

Part c

For quantum oscillators, the partition function of each oscillator is simply \sum_n e^{-\beta E_n} where E_n are the (discrete) energy levels given by (n + 1/2)\hbar \omega, with n = 0,1,2,\cdots. Hence, find the canonical partition function for N independent distinguishable quantum oscillators, and find the mean energy and specific heat at temperature T.

Part d

Show that the quantum results go over into the classical results at high temperature k_{\mathrm{B}} T \gg \hbar \omega, and comment on why this makes sense.

Part e

Also find the low temperature behavior of the specific heat in both classical and quantum cases when k_{\mathrm{B}} T \ll \hbar \omega.

Answer

Classical partition function

For a single particle in one dimension our partition function is

\begin{aligned}Z_1 = \frac{1}{{h}} \int dp dq e^{-\beta \left( \frac{1}{{2 m}} p^2 + \frac{1}{{2}} m \omega^2 q^2 \right)},\end{aligned} \hspace{\stretch{1}}(1.0.17)

with

\begin{aligned}a = \sqrt{\frac{\beta}{2 m}} p\end{aligned} \hspace{\stretch{1}}(1.0.18a)

\begin{aligned}b = \sqrt{\frac{\beta m}{2}} \omega q,\end{aligned} \hspace{\stretch{1}}(1.0.18b)

we have

\begin{aligned}Z_1 &= \frac{1}{{h \omega}} \sqrt{\frac{2 m}{\beta}} \sqrt{\frac{2}{\beta m}} \int da db e^{-a^2 - b^2} \\ &= \frac{2}{\beta h \omega}2 \pi \int_0^\infty r e^{-r^2} dr \\ &= \frac{2 \pi}{\beta h \omega} \\ &= \frac{1}{\beta \hbar \omega}.\end{aligned} \hspace{\stretch{1}}(1.0.19)

So for N distinguishable classical one dimensional harmonic oscillators we have

\begin{aligned}\boxed{Z_N(T) = Z_1^N = \left( \frac{k_{\mathrm{B}} T}{\hbar \omega} \right)^N.}\end{aligned} \hspace{\stretch{1}}(1.0.20)
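A numeric spot check of the single oscillator integral eq. 1.0.19 (a minimal sketch; the mass, frequency, and temperature values are arbitrary choices, in units where \hbar = 1):

import numpy as np
from scipy.integrate import dblquad

m, omega, beta, hbar = 1.3, 0.7, 0.5, 1.0  # arbitrary test values
h = 2 * np.pi * hbar

H = lambda p, q: p**2 / (2 * m) + 0.5 * m * omega**2 * q**2
Z1, _ = dblquad(lambda p, q: np.exp(-beta * H(p, q)) / h,
                -np.inf, np.inf, lambda q: -np.inf, lambda q: np.inf)
print(Z1, 1 / (beta * hbar * omega))   # both ~2.857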

Classical mean energy and heat capacity

From the free energy

\begin{aligned}F = -k_{\mathrm{B}} T \ln Z_N = N k_{\mathrm{B}} T \ln (\beta \hbar \omega),\end{aligned} \hspace{\stretch{1}}(1.0.21)

we can compute the mean energy

\begin{aligned}U &= \frac{1}{{k_{\mathrm{B}}}} \frac{\partial {}}{\partial {\beta}} \left( \frac{F}{T} \right) \\ &= N \frac{\partial {}}{\partial {\beta}} \ln (\beta \hbar \omega) \\ &= \frac{N }{\beta},\end{aligned} \hspace{\stretch{1}}(1.0.22)

or

\begin{aligned}\boxed{U = N k_{\mathrm{B}} T.}\end{aligned} \hspace{\stretch{1}}(1.0.23)

The specific heat follows immediately

\begin{aligned}\boxed{C_{\mathrm{V}} = \frac{\partial {U}}{\partial {T}} = N k_{\mathrm{B}}.}\end{aligned} \hspace{\stretch{1}}(1.0.24)

Quantum partition function, mean energy and heat capacity

For a single one dimensional quantum oscillator, our partition function is

\begin{aligned}Z_1 &= \sum_{n = 0}^\infty e^{-\beta \hbar \omega \left( n + \frac{1}{{2}} \right)} \\ &= e^{-\beta \hbar \omega/2}\sum_{n = 0}^\infty e^{-\beta \hbar \omega n} \\ &= \frac{e^{-\beta \hbar \omega/2}}{1 - e^{-\beta \hbar \omega}} \\ &= \frac{1}{e^{\beta \hbar \omega/2} - e^{-\beta \hbar \omega/2}} \\ &= \frac{1}{{2 \sinh(\beta \hbar \omega/2)}}.\end{aligned} \hspace{\stretch{1}}(1.0.25)

Assuming distinguishable quantum oscillators, our N particle partition function is

\begin{aligned}\boxed{Z_N(\beta) = \frac{1}{{2^N \sinh^N(\beta \hbar \omega/2)}}.}\end{aligned} \hspace{\stretch{1}}(1.0.26)

This time we don’t add the 1/\hbar correction factor, nor the N! indistinguishability correction factor.

Our free energy is

\begin{aligned}F = N k_{\mathrm{B}} T \ln \left( 2 \sinh(\beta \hbar \omega/2) \right),\end{aligned} \hspace{\stretch{1}}(1.0.27)

our mean energy is

\begin{aligned}U &= \frac{1}{{k_{\mathrm{B}}}} \frac{\partial {}}{\partial {\beta}} \frac{F}{T} \\ &= N \frac{\partial {}}{\partial {\beta}}\ln \left( 2 \sinh(\beta \hbar \omega/2) \right) \\ &= N \frac{\cosh( \beta \hbar \omega/2 )}{\sinh(\beta \hbar \omega/2)} \frac{\hbar \omega}{2},\end{aligned} \hspace{\stretch{1}}(1.0.28)

or

\begin{aligned}\boxed{U(T)= \frac{N \hbar \omega}{2} \coth \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right).}\end{aligned} \hspace{\stretch{1}}(1.0.29)

This is plotted in fig. 1.1.

Fig 1.1: Mean energy for N one dimensional quantum harmonic oscillators

With \coth'(x) = -1/\sinh^2(x), our specific heat is

\begin{aligned}C_{\mathrm{V}} &= \frac{\partial {U}}{\partial {T}} \\ &= \frac{N \hbar \omega}{2} \frac{-1}{\sinh^2 \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right)} \frac{\hbar \omega}{2 k_{\mathrm{B}}} \left( \frac{-1}{T^2} \right),\end{aligned} \hspace{\stretch{1}}(1.0.30)

or

\begin{aligned}\boxed{C_{\mathrm{V}} = N k_{\mathrm{B}}\left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T \sinh \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right) } \right)^2.}\end{aligned} \hspace{\stretch{1}}(1.0.31)
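These expressions are easy to probe numerically (a minimal sketch; working in arbitrary units where \hbar \omega = k_{\mathrm{B}} = N = 1):

import numpy as np

def U(T):
    # eq. 1.0.29 with N = hbar omega = kB = 1; coth(x) = 1/tanh(x)
    return 0.5 / np.tanh(0.5 / T)

def Cv(T):
    # eq. 1.0.31 in the same units
    x = 0.5 / T
    return (x / np.sinh(x))**2

for T in (0.05, 1.0, 100.0):
    print(T, U(T), Cv(T))

At T = 100 this gives U \approx 100.0008 (i.e. k_{\mathrm{B}} T) and C_{\mathrm{V}} \approx 1, while at T = 0.05 the energy saturates at the zero point value 1/2 and C_{\mathrm{V}} is of order 10^{-6}.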

Classical limits

In the high temperature limit 1 \gg \hbar \omega/k_{\mathrm{B}} T, we have

\begin{aligned}\cosh \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right)\approx 1\end{aligned} \hspace{\stretch{1}}(1.0.32)

\begin{aligned}\sinh \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right)\approx \frac{\hbar \omega}{2 k_{\mathrm{B}} T},\end{aligned} \hspace{\stretch{1}}(1.0.33)

so

\begin{aligned}U \approx N \frac{\not{{\hbar \omega}}}{\not{{2}}} \frac{\not{{2}} k_{\mathrm{B}} T}{\not{{\hbar \omega}}},\end{aligned} \hspace{\stretch{1}}(1.0.34)

or

\begin{aligned}U(T) \approx N k_{\mathrm{B}} T,\end{aligned} \hspace{\stretch{1}}(1.0.35)

matching the classical result of eq. 1.0.23. Similarly from the quantum specific heat result of eq. 1.0.31, we have

\begin{aligned}C_{\mathrm{V}}(T) \approx N k_{\mathrm{B}}\left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T \left( \frac{\hbar \omega}{2 k_{\mathrm{B}} T} \right) } \right)^2= N k_{\mathrm{B}}.\end{aligned} \hspace{\stretch{1}}(1.0.36)

This matches our classical result from eq. 1.0.24. We expect this equivalence at high temperatures, since in that limit our quantum partition function eq. 1.0.26 is approximately

\begin{aligned}Z_N \approx \left( \frac{1}{\beta \hbar \omega} \right)^N,\end{aligned} \hspace{\stretch{1}}(1.0.37)

which is exactly the classical partition function of eq. 1.0.20. At high temperatures the thermal energy k_{\mathrm{B}} T is large compared to the level spacing \hbar \omega, so the quantum nature of the system has no significant effect.

Low temperature limits

For the classical case the heat capacity was constant (C_{\mathrm{V}} = N k_{\mathrm{B}}), all the way down to zero temperature. For the quantum case the heat capacity drops to zero at low temperatures. We can see that via l’Hôpital’s rule. With x = \hbar \omega \beta/2 the low temperature limit is

\begin{aligned}\lim_{T \rightarrow 0} C_{\mathrm{V}} &= N k_{\mathrm{B}} \lim_{x \rightarrow \infty} \frac{x^2}{\sinh^2 x} \\ &= N k_{\mathrm{B}} \lim_{x \rightarrow \infty} \frac{2x }{2 \sinh x \cosh x} \\ &= N k_{\mathrm{B}} \lim_{x \rightarrow \infty} \frac{1 }{\cosh^2 x + \sinh^2 x} \\ &= N k_{\mathrm{B}} \lim_{x \rightarrow \infty} \frac{1 }{\cosh (2 x) } \\ &= 0.\end{aligned} \hspace{\stretch{1}}(1.0.38)

We also see this in the plot of fig. 1.2.

Fig 1.2: Specific heat for N quantum oscillators

Question: Quantum electric dipole (2013 problem set 5, p3)

A quantum electric dipole at a fixed space point has its energy determined by two parts – a part which comes from its angular motion and a part coming from its interaction with an applied electric field \mathcal{E}. This leads to a quantum Hamiltonian

\begin{aligned}H = \frac{\mathbf{L} \cdot \mathbf{L}}{2 I} - \mu \mathcal{E} L_z,\end{aligned} \hspace{\stretch{1}}(1.0.39)

where I is the moment of inertia, and we have assumed an electric field \mathcal{E} = \mathcal{E} \hat{\mathbf{z}}. This Hamiltonian has eigenstates described by spherical harmonics Y_{l, m}(\theta, \phi), with m taking on 2l+1 possible integral values, m = -l, -l + 1, \cdots, l -1, l. The corresponding eigenvalues are

\begin{aligned}\lambda_{l, m} = \frac{l(l+1) \hbar^2}{2I} - \mu \mathcal{E} m \hbar.\end{aligned} \hspace{\stretch{1}}(1.0.40)

(Recall that l is the total angular momentum eigenvalue, while m is the eigenvalue corresponding to L_z.)

Part a

Schematically sketch these eigenvalues as a function of \mathcal{E} for l = 0,1,2.

Part b

Find the quantum partition function, assuming only l = 0 and l = 1 contribute to the sum.

Part c

Using this partition function, find the average dipole moment \mu \left\langle{{L_z}}\right\rangle as a function of the electric field and temperature for small electric fields, commenting on its behavior at very high temperature and very low temperature.

Part d

Estimate the temperature above which discarding higher angular momentum states, with l \ge 2, is not a good approximation.

Answer

Sketch the energy eigenvalues

Let’s summarize the values of the energy eigenvalues \lambda_{l,m} for l = 0, 1, 2 before attempting to plot them.

l = 0

For l = 0, the magnetic quantum number can only take the value m = 0, so we have

\begin{aligned}\lambda_{0,0} = 0.\end{aligned} \hspace{\stretch{1}}(1.0.41)

l = 1

For l = 1 we have

\begin{aligned}\frac{l(l+1)}{2} = 1(2)/2 = 1,\end{aligned} \hspace{\stretch{1}}(1.0.42)

so we have

\begin{aligned}\lambda_{1,0} = \frac{\hbar^2}{I} \end{aligned} \hspace{\stretch{1}}(1.0.43a)

\begin{aligned}\lambda_{1,\pm 1} = \frac{\hbar^2}{I} \mp \mu \mathcal{E} \hbar.\end{aligned} \hspace{\stretch{1}}(1.0.43b)

l = 2

For l = 2 we have

\begin{aligned}\frac{l(l+1)}{2} = 2(3)/2 = 3,\end{aligned} \hspace{\stretch{1}}(1.0.44)

so we have

\begin{aligned}\lambda_{2,0} = \frac{3 \hbar^2}{I} \end{aligned} \hspace{\stretch{1}}(1.0.45a)

\begin{aligned}\lambda_{2,\pm 1} = \frac{3 \hbar^2}{I} \mp \mu \mathcal{E} \hbar\end{aligned} \hspace{\stretch{1}}(1.0.45b)

\begin{aligned}\lambda_{2,\pm 2} = \frac{3 \hbar^2}{I} \mp 2 \mu \mathcal{E} \hbar.\end{aligned} \hspace{\stretch{1}}(1.0.45c)

These are sketched as a function of \mathcal{E} in fig. 1.3.

Fig 1.3: Energy eigenvalues for l = 0,1, 2

Partition function

Our partition function, in general, is

\begin{aligned}Z &= \sum_{l = 0}^\infty \sum_{m = -l}^l e^{-\lambda_{l,m} \beta} \\ &= \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right)\sum_{m = -l}^l e^{ m \mu \hbar \mathcal{E} \beta}.\end{aligned} \hspace{\stretch{1}}(1.0.46)

Dropping all but l = 0, 1 terms this is

\begin{aligned}Z \approx 1 + e^{-\hbar^2 \beta/I} \left( 1 + e^{- \mu \hbar \mathcal{E} \beta } + e^{ \mu \hbar \mathcal{E} \beta} \right),\end{aligned} \hspace{\stretch{1}}(1.0.47)

or

\begin{aligned}\boxed{Z \approx 1 + e^{-\hbar^2 \beta/I} (1 + 2 \cosh\left( \mu \hbar \mathcal{E} \beta \right)).}\end{aligned} \hspace{\stretch{1}}(1.0.48)

Average dipole moment

For the average dipole moment, averaging over both the states and the partitions, we have

\begin{aligned}Z \left\langle{{ \mu L_z }}\right\rangle &= \sum_{l = 0}^\infty \sum_{m = -l}^l {\left\langle {l m} \right\rvert} \mu L_z {\left\lvert {l m} \right\rangle} e^{-\beta \lambda_{l, m}} \\ &= \sum_{l = 0}^\infty \sum_{m = -l}^l \mu {\left\langle {l m} \right\rvert} m \hbar {\left\lvert {l m} \right\rangle} e^{-\beta \lambda_{l, m}} \\ &= \mu \hbar \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right)\sum_{m = -l}^l m e^{ \mu m \hbar \mathcal{E} \beta} \\ &= \mu \hbar \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right)\sum_{m = 1}^l m \left( e^{ \mu m \hbar \mathcal{E} \beta} -e^{-\mu m \hbar \mathcal{E} \beta} \right) \\ &= 2 \mu \hbar \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right)\sum_{m = 1}^l m \sinh (\mu m \hbar \mathcal{E} \beta).\end{aligned} \hspace{\stretch{1}}(1.0.49)

Truncating the sum at l = 1 we have

\begin{aligned}\left\langle{{ \mu L_z }}\right\rangle \approx\frac{2 \mu \hbar }{Z}\left( 1 (0) + e^{-\hbar^2 \beta/ I} \sinh (\mu \hbar \mathcal{E} \beta) \right)\approx2 \mu \hbar \frac{e^{-\hbar^2 \beta/ I} \sinh (\mu \hbar \mathcal{E} \beta) }{1 + e^{-\hbar^2 \beta/I} \left( 1 + 2 \cosh( \mu \hbar \mathcal{E} \beta) \right)},\end{aligned} \hspace{\stretch{1}}(1.0.50)

or

\begin{aligned}\boxed{\left\langle{{ \mu L_z }}\right\rangle \approx\frac{2 \mu \hbar \sinh (\mu \hbar \mathcal{E} \beta) }{e^{\hbar^2 \beta/I} + 1 + 2 \cosh( \mu \hbar \mathcal{E} \beta)}.}\end{aligned} \hspace{\stretch{1}}(1.0.51)

This is plotted in fig. 1.4.

Fig 1.4: Dipole moment

For high temperatures \mu \hbar \mathcal{E} \beta \ll 1 or k_{\mathrm{B}} T \gg \mu \hbar \mathcal{E}, expanding the hyperbolic sine and cosines to first and second order respectively and the exponential to first order we have

\begin{aligned}\left\langle{{ \mu L_z }}\right\rangle &\approx 2 \mu \hbar \frac{ \frac{\mu \hbar \mathcal{E}}{k_{\mathrm{B}} T}}{ 4 + \frac{\hbar^2}{I k_{\mathrm{B}} T} + \left( \frac{\mu \hbar \mathcal{E}}{k_{\mathrm{B}} T} \right)^2}=\frac{2 (\mu \hbar)^2 \mathcal{E} k_{\mathrm{B}} T}{4 (k_{\mathrm{B}} T)^2 + \hbar^2 k_{\mathrm{B}} T/I + (\mu \hbar \mathcal{E})^2 } \\ &\approx\frac{(\mu \hbar)^2 \mathcal{E}}{4 k_{\mathrm{B}} T}.\end{aligned} \hspace{\stretch{1}}(1.0.52)

Our dipole moment tends to zero approximately inversely proportional to temperature. These last two respective approximations are plotted along with the all temperature range result in fig. 1.5.

Fig 1.5: High temperature approximations to dipole moments

For low temperatures k_{\mathrm{B}} T \ll \mu \hbar \mathcal{E}, where \mu \hbar \mathcal{E} \beta \gg 1 we have

\begin{aligned}\left\langle{{ \mu L_z }}\right\rangle \approx\frac{ 2 \mu \hbar e^{\mu \hbar \mathcal{E} \beta} }{ e^{\hbar^2 \beta/I} + e^{\mu \hbar \mathcal{E} \beta} }=\frac{ 2 \mu \hbar }{ 1 + e^{ (\hbar^2/I - \mu \hbar \mathcal{E}) \beta } }.\end{aligned} \hspace{\stretch{1}}(1.0.53)

Provided the electric field is small enough (which means here that \mathcal{E} < \hbar/(\mu I)) this will look something like fig. 1.6.

Fig 1.6: Low temperature dipole moment behavior

Approximation validation

In order to validate the approximation, let’s first put the partition function and the numerator of the dipole moment into a tidier closed form, evaluating the sums over the azimuthal indices m so that only sums over l remain. First let’s sum the exponentials for the partition function, making a change of summation variable n = m + l

\begin{aligned}\sum_{m = -l}^l a^m &= a^{-l} \sum_{n=0}^{2l} a^n \\ &= a^{-l} \frac{a^{2l + 1} - 1}{a - 1} \\ &= \frac{a^{l + 1} - a^{-l}}{a - 1} \\ &= \frac{a^{l + 1/2} - a^{-(l+1/2)}}{a^{1/2} - a^{-1/2}}.\end{aligned} \hspace{\stretch{1}}(1.0.54)

With a substitution of a = e^b, we have

\begin{aligned}\boxed{\sum_{m = -l}^l e^{b m}=\frac{\sinh(b(l + 1/2))}{\sinh(b/2)}.}\end{aligned} \hspace{\stretch{1}}(1.0.55)

Now we can sum the azimuthal exponentials for the dipole moment. This sum is of the form

\begin{aligned}\sum_{m = -l}^l m a^m &= a \left( \sum_{m = 1}^l + \sum_{m = -l}^{-1} \right)m a^{m-1} \\ &= a \frac{d}{da}\sum_{m = 1}^l\left( a^{m} + a^{-m} \right) \\ &= a \frac{d}{da}\left( \sum_{m = -l}^l a^m - \not{{1}} \right) \\ &= a \frac{d}{da}\left( \frac{a^{l + 1/2} - a^{-(l+1/2)}}{a^{1/2} - a^{-1/2}} \right).\end{aligned} \hspace{\stretch{1}}(1.0.56)

With a = e^{b}, and 1 = a db/da, we have

\begin{aligned}a \frac{d}{da} = a \frac{db}{da} \frac{d}{db} = \frac{d}{db},\end{aligned} \hspace{\stretch{1}}(1.0.57)

so that

\begin{aligned}\sum_{m = -l}^l m e^{b m}= \frac{d}{db}\left( \frac{ \sinh(b(l + 1/2)) }{ \sinh(b/2) } \right).\end{aligned} \hspace{\stretch{1}}(1.0.58)

With a little help from Mathematica to simplify that result we have

\begin{aligned}\boxed{\sum_{m = -l}^l m e^{b m}=\frac{l \sinh(b (l+1)) - (l+1) \sinh(b l) }{2 \sinh^2(b/2)}.}\end{aligned} \hspace{\stretch{1}}(1.0.59)
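Both boxed sums are easy to verify numerically (a minimal check; the value of b is an arbitrary choice):

import numpy as np

b = 0.37
for l in range(5):
    m = np.arange(-l, l + 1)
    lhs1 = np.sum(np.exp(b * m))                       # eq. 1.0.55, left side
    rhs1 = np.sinh(b * (l + 0.5)) / np.sinh(b / 2)
    lhs2 = np.sum(m * np.exp(b * m))                   # eq. 1.0.59, left side
    rhs2 = (l * np.sinh(b * (l + 1)) - (l + 1) * np.sinh(b * l)) / (2 * np.sinh(b / 2)**2)
    print(l, lhs1 - rhs1, lhs2 - rhs2)                 # differences ~1e-15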

We can now express the average dipole moment with only sums over radial indices l

\begin{aligned}\left\langle{{ \mu L_z }}\right\rangle &= \mu \hbar \frac{ \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right) \sum_{m = -l}^l m e^{ \mu m \hbar \mathcal{E} \beta}}{ \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right) \sum_{m = -l}^l e^{ m \mu \hbar \mathcal{E} \beta}} \\ &= \mu \hbar\frac{ \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right) \frac { l \sinh(\mu \hbar \mathcal{E} \beta (l+1)) - (l+1) \sinh(\mu \hbar \mathcal{E} \beta l) } { 2 \sinh^2(\mu \hbar \mathcal{E} \beta/2) }}{\sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right) \frac { \sinh(\mu \hbar \mathcal{E} \beta(l + 1/2)) } { \sinh(\mu \hbar \mathcal{E} \beta/2) }}.\end{aligned} \hspace{\stretch{1}}(1.0.60)

So our average dipole moment is

\begin{aligned}\boxed{\left\langle{{ \mu L_z }}\right\rangle = \frac{\mu \hbar }{2 \sinh(\mu \hbar \mathcal{E} \beta/2)}\frac{ \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right)\left( l \sinh(\mu \hbar \mathcal{E} \beta (l+1)) - (l+1) \sinh(\mu \hbar \mathcal{E} \beta l) \right)}{ \sum_{l = 0}^\infty \exp\left( -\frac{l (l+1) \hbar^2 \beta}{2 I} \right) \sinh(\mu \hbar \mathcal{E} \beta(l + 1/2))}.}\end{aligned} \hspace{\stretch{1}}(1.0.61)

The hyperbolic sine in the denominator from the partition function and the difference of hyperbolic sines in the numerator both grow fast. This is illustrated in fig. 1.7.

Fig 1.7: Hyperbolic sine plots for dipole moment

Let’s look at the order of these hyperbolic sines for large arguments. For the numerator we have a difference of the form

\begin{aligned}x \sinh( x + 1 ) - (x + 1) \sinh ( x ) &= \frac{1}{{2}} \left( x \left( e^{x + 1} - e^{-x - 1} \right) -(x +1 ) \left( e^{x } - e^{-x } \right) \right)\approx\frac{1}{{2}} \left( x e^{x + 1} -(x +1 ) e^{x } \right) \\ &= \frac{1}{{2}} \left( x e^{x} ( e - 1 ) - e^x \right) \\ &= O(x e^x).\end{aligned} \hspace{\stretch{1}}(1.0.62)

For the hyperbolic sine from the partition function we have for large x

\begin{aligned}\sinh( x + 1/2) = \frac{1}{{2}} \left( e^{x + 1/2} - e^{-x - 1/2} \right)\approx \frac{\sqrt{e}}{2} e^{x}= O(e^x).\end{aligned} \hspace{\stretch{1}}(1.0.63)

While these hyperbolic sines increase without bound as l increases, the \mathbf{L}^2 contribution to these sums supplies a decaying exponential, quadratic in l in the exponent. Provided that exponent is large enough, the decay dominates the growth of the hyperbolic sines for all l. That is

\begin{aligned}\frac{l(l+1) \hbar^2}{2 I k_{\mathrm{B}} T} \gg 1,\end{aligned} \hspace{\stretch{1}}(1.0.64)

or

\begin{aligned}T \ll \frac{l(l+1) \hbar^2}{2 I k_{\mathrm{B}}}.\end{aligned} \hspace{\stretch{1}}(1.0.65)

Observe that the RHS of this inequality, for l = 1, 2, 3, 4, \cdots satisfies

\begin{aligned}\frac{\hbar^2 }{I k_{\mathrm{B}}}<\frac{3 \hbar^2 }{I k_{\mathrm{B}}}<\frac{6 \hbar^2 }{I k_{\mathrm{B}}}<\frac{10 \hbar^2 }{I k_{\mathrm{B}}}< \cdots\end{aligned} \hspace{\stretch{1}}(1.0.66)

So, for small electric fields, our approximation should be valid provided our temperature is constrained by

\begin{aligned}\boxed{T \ll \frac{\hbar^2 }{I k_{\mathrm{B}}}.}\end{aligned} \hspace{\stretch{1}}(1.0.67)
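This validity condition is easily explored numerically (a minimal sketch; units where \hbar = I = \mu = k_{\mathrm{B}} = 1 and the small field value are arbitrary choices), comparing the l \le 1 truncation against the full double sum:

import numpy as np

E = 0.1   # small electric field (arbitrary)

def dipole(T, lmax):
    beta = 1.0 / T
    num = den = 0.0
    for l in range(lmax + 1):
        w = np.exp(-l * (l + 1) * beta / 2)   # L^2 Boltzmann factor
        m = np.arange(-l, l + 1)
        boltz = np.exp(m * E * beta)
        num += w * np.sum(m * boltz)
        den += w * np.sum(boltz)
    return num / den                          # <mu L_z> in these units

for T in (0.1, 1.0, 10.0):
    print(T, dipole(T, 1), dipole(T, 40))

At T = 0.1 \ll \hbar^2/(I k_{\mathrm{B}}) = 1 the truncated and full sums agree to many digits, while at T = 10 they differ substantially.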

An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 3, 2013

In A compilation of notes, so far, for ‘PHY452H1S Basic Statistical Mechanics’ I posted a link to this compilation of statistical mechanics course notes.

That compilation now includes all of the following too (no further updates will be made to any of these):

February 28, 2013 Rotation of diatomic molecules

February 28, 2013 Helmholtz free energy

February 26, 2013 Statistical and thermodynamic connection

February 24, 2013 Ideal gas

February 16, 2013 One dimensional well problem from Pathria chapter II

February 15, 2013 1D pendulum problem in phase space

February 14, 2013 Continuing review of thermodynamics

February 13, 2013 Lightning review of thermodynamics

February 11, 2013 Cartesian to spherical change of variables in 3d phase space

February 10, 2013 n SHO particle phase space volume

February 10, 2013 Change of variables in 2d phase space

February 10, 2013 Some problems from Kittel chapter 3

February 07, 2013 Midterm review, thermodynamics

February 06, 2013 Limit of unfair coin distribution, the hard way

February 05, 2013 Ideal gas and SHO phase space volume calculations

February 03, 2013 One dimensional random walk

February 02, 2013 1D SHO phase space

February 02, 2013 Application of the central limit theorem to a product of random vars

January 31, 2013 Liouville’s theorem questions on density and current

January 30, 2013 State counting

Application of the central limit theorem to a product of random vars

Posted by peeterjoot on February 1, 2013

Our midterm had a question asking what the central limit theorem said about a product of random variables. Say, Y = X_1 X_2 \cdots X_N, where the random variables X_k had mean and variance \mu and \sigma^2 respectively. My answer was to state that the central limit theorem didn’t apply, since it is for a sum of independent and identically distributed random variables. I also stated the theorem and described what it says about such summed random variables.

Wondering if this was really all the question required, I went looking to see if there was in fact some way to apply the central limit theorem to such a product, and found http://math.stackexchange.com/q/82133. The central limit theorem can be applied to the logarithm of such a product, provided all the random variables are strictly positive.

For example, if we write

\begin{aligned}Z = \ln Y = \sum_{k = 1}^N \ln X_k,\end{aligned} \hspace{\stretch{1}}(1.0.1)

now we have something that the central limit theorem can be applied to. It will be interesting to see if this is the answer that the midterm was looking for. It is one that wasn’t obvious enough for me to think of it at the time. In fact, it’s also not something that we can even state a precise central limit theorem result for, because we don’t have enough information to state the mean and variance of the logarithm of the random vars X_k. For example, if the random vars are continuous, we have

\begin{aligned}\left\langle{{\ln X}}\right\rangle = \int \rho(X) \ln X dX.\end{aligned} \hspace{\stretch{1}}(1.0.2)

Conceivably, if we knew all the moments of X we could expand the logarithm in Taylor series. In fact we need more than that. If we suppose that 0 < X < 2 \mu, so that \left\lvert {X/\mu - 1} \right\rvert \le 1, we can write

\begin{aligned}\ln X &= \ln \left( \mu + (X - \mu) \right) \\ &= \ln \mu + \ln \left( { 1 + \left(\frac{X}{\mu} - 1\right) } \right) \\ &= \ln \mu + \sum_{k = 1}^{\infty} (-1)^{k+1} \frac{\left( {\frac{X}{\mu} -1} \right)^k}{k}.\end{aligned} \hspace{\stretch{1}}(1.0.3)

With such a bounding for the random variable X we’d have

\begin{aligned}\left\langle{{\ln X}}\right\rangle = \ln \mu + \sum_{k = 1}^{\infty} \frac{(-1)^{k+1}}{k} \left\langle{{\left( {\frac{X}{\mu} -1} \right)^k}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.0.4)

We need all the higher order moments of X/\mu - 1 (or equivalently all the moments of X), and can’t just assume that \left\langle{{\ln X}}\right\rangle = \ln \mu.

Suppose instead that we just assume that it is possible to find the mean and variance of the logarithm of the random variables X_k, say

\begin{subequations}

\begin{aligned}\mu_{\mathrm{ln}} = \left\langle{{\ln X}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}\sigma_{\mathrm{ln}}^2 = \left\langle{{(\ln X)^2}}\right\rangle - \left\langle{{\ln X}}\right\rangle^2.\end{aligned} \hspace{\stretch{1}}(1.0.5b)

\end{subequations}

Now we can state that for large N the random variable Z has a distribution approximated by

\begin{aligned}\rho(Z) = \frac{1}{{\sigma_{\mathrm{ln}} \sqrt{2 \pi N}}} \exp\left( - \frac{ (Z - N \mu_{\mathrm{ln}})^2}{2 N \sigma_{\mathrm{ln}}^2} \right).\end{aligned} \hspace{\stretch{1}}(1.0.6)

Given that, we can say that the random variable Y = X_1 X_2 \cdots X_N is the exponential of a random variable with the distribution given approximately (for large N) by 1.0.6.
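Here’s a quick Monte Carlo sanity check of 1.0.6, a sketch of my own (the uniform distribution on (0.1, 2) for the X_k is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 50, 100_000

# Estimate mu_ln and sigma_ln by sampling, since <ln X> != ln <X>.
ref = np.log(rng.uniform(0.1, 2.0, size=1_000_000))
mu_ln, sigma_ln = ref.mean(), ref.std()

# Z = ln Y = sum_k ln X_k, computed for each trial.
Z = np.log(rng.uniform(0.1, 2.0, size=(trials, N))).sum(axis=1)

print(f"mean of Z:  {Z.mean():+.4f}   CLT prediction: {N * mu_ln:+.4f}")
print(f"stdev of Z: {Z.std():.4f}   CLT prediction: {np.sqrt(N) * sigma_ln:.4f}")
```

The sample mean and standard deviation of Z should land on N \mu_{\mathrm{ln}} and \sqrt{N} \sigma_{\mathrm{ln}}, and a histogram of Z should look Gaussian.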

It will be interesting to see if this is the answer that we were asked to state. I’m guessing not. If it was, then a lot more cleverness than I had was expected.

Posted in Math and Physics Learning. | Leave a Comment »

PHY452H1S Basic Statistical Mechanics. Problem Set 1: Binomial distributions

Posted by peeterjoot on January 20, 2013

[Click here for a PDF of this post with nicer formatting]

Disclaimer

This is an ungraded set of answers to the problems posed.

Question: Limiting form of the binomial distribution

Starting from the simple case of the binomial distribution

\begin{aligned}P_N(X) = 2^{-N} \frac{N!}{\left(\frac{N + X}{2}\right)!\left(\frac{N - X}{2}\right)!}\end{aligned} \hspace{\stretch{1}}(1.0.1)

derive the Gaussian distribution which results when N \gg 1 and {\left\lvert{X}\right\rvert} \ll N.

Answer

We’ll work with the logarithms of P_N(X).

Note that the logarithm of the Stirling approximation takes the form

\begin{aligned}\ln a! &\approx \ln \sqrt{2\pi} + \frac{1}{{2}} \ln a + a \ln a - a \\ &=\ln \sqrt{2\pi} + \left( a + \frac{1}{{2}} \right) \ln a - a\end{aligned} \hspace{\stretch{1}}(1.0.2)

Using this we have

\begin{aligned}\ln \left((N + X)/2\right)!=\ln \sqrt{2 \pi}+\left(\frac{N + 1 + X}{2} \right)\left(\ln \left(1 + \frac{X}{N}\right)+ \ln \frac{N}{2}\right)- \frac{N + X}{2}\end{aligned} \hspace{\stretch{1}}(1.0.3)

Adding \ln \left( (N + X)/2 \right)! + \ln \left( (N - X)/2 \right)!, we have

\begin{aligned}2 \ln \sqrt{2 \pi}-N+\left(\frac{N + 1 + X}{2} \right)\left(\ln \left(1 + \frac{X}{N}\right)+ \ln \frac{N}{2}\right)+\left(\frac{N + 1 - X}{2} \right)\left(\ln \left(1 - \frac{X}{N}\right)+ \ln \frac{N}{2}\right)=2 \ln \sqrt{2 \pi}-N+\left(\frac{N + 1}{2} \right)\left(\ln \left(1 - \frac{X^2}{N^2}\right)+ 2 \ln \frac{N}{2}\right)+\frac{X}{2}\left( \ln \left( 1 + \frac{X}{N} \right)- \ln \left( 1 - \frac{X}{N} \right)\right)\end{aligned} \hspace{\stretch{1}}(1.0.4)

Recall that we can expand the log around 1 with the slowly converging Taylor series

\begin{aligned}\ln( 1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4}\end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}\ln( 1 - x) = -x - \frac{x^2}{2} - \frac{x^3}{3} - \frac{x^4}{4},\end{aligned} \hspace{\stretch{1}}(1.0.5b)

but if x \ll 1 the first order term will dominate, so in this case where we assume {\left\lvert{X}\right\rvert} \ll N, we can approximate this sum of factorial logs to first order as

\begin{aligned}2 \ln \sqrt{2 \pi} -N+\left(\frac{N + 1}{2} \right)\left(- \frac{X^2}{N^2}+ 2 \ln \frac{N}{2}\right)+\frac{X}{2}\left( \frac{X}{N} + \frac{X}{N}\right) &= 2 \ln \sqrt{2 \pi} -N+ \frac{X^2}{N} \left( - \frac{N + 1}{2N} + 1\right)+ (N + 1) \ln \frac{N}{2} &\approx 2 \ln \sqrt{2 \pi} -N+ \frac{X^2}{2 N} + (N + 1) \ln \frac{N}{2}.\end{aligned} \hspace{\stretch{1}}(1.0.6)

Putting the bits together, we have

\begin{aligned}\ln P_N(X) &\approx - N \ln 2 + \left( N + \frac{1}{{2}}\right) \ln N - \not{{N}} - \ln \sqrt{2 \pi} + \not{{N}} -\frac{X^2}{2N} - (N + 1) \ln \frac{N}{2} \\ &= \left(-\not{{N}} + \not{{N}} + 1\right) \ln 2+\left(\not{{N}} + \frac{1}{{2}} - \not{{N}} - 1\right) \ln N- \ln \sqrt{2 \pi} - \frac{X^2}{2N} \\ &= \ln \left(\frac{2}{\sqrt{2 \pi N}}\right)-\frac{X^2}{2 N}\end{aligned} \hspace{\stretch{1}}(1.0.7)

Exponentiating gives us the desired result

\begin{aligned}\boxed{P_N(X) \rightarrow \frac{2}{\sqrt{2 \pi N}} e^{-\frac{X^2}{2 N}}.}\end{aligned} \hspace{\stretch{1}}(1.0.8)
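As a numerical sanity check of 1.0.8 (a sketch of my own; N = 100 is an arbitrary choice), the exact binomial probabilities can be compared against this Gaussian limit:

```python
from math import comb, exp, pi, sqrt

N = 100
for X in range(0, 21, 4):  # X must have the same parity as N
    exact = comb(N, (N - X) // 2) / 2 ** N
    gauss = 2 / sqrt(2 * pi * N) * exp(-X ** 2 / (2 * N))
    print(f"X = {X:2d}   exact = {exact:.6f}   gaussian = {gauss:.6f}")
```

The two columns should agree to a few parts in a thousand for {\left\lvert{X}\right\rvert} \ll N.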

Question: Binomial distribution for biased coin

Consider the more general case of a binomial distribution where the probability of a head is r and a tail is (1 - r) (a biased coin). With \text{head} = -1 and \text{tail} = +1, obtain the binomial distribution P_N(r,X) for obtaining a total of X from N coin tosses. What is the limiting form of this distribution when N \gg 1 and {\left\lvert{X - \left\langle{X}\right\rangle}\right\rvert} \ll N? The latter condition simply means that I need to carry out any Taylor expansions in X about its mean value \left\langle{{X}}\right\rangle. The mean \left\langle{{X}}\right\rangle can be easily computed first in terms of r.

Answer

Let’s consider 1, 2, 3, and N tosses in sequence to understand the pattern.

1 toss

The base case has just two possibilities

  1. Heads, P = r, X = -1
  2. Tails, P = (1 - r), X = 1

If k = 0,1 for X = -1, 1 respectively, we have

\begin{aligned}P_1(r, X) = r^{1 - k} (1 - r)^{k}\end{aligned} \hspace{\stretch{1}}(1.0.9)

As a check, when r = 1/2 we have P_1(X) = 1/2.

2 tosses

Our sample space is now a bit bigger

  1. (h,h), P = r^2, X = -2
  2. (h,t), P = r (1 - r), X = 0
  3. (t,h), P = r (1 - r), X = 0
  4. (t,t), P = (1 - r)^2, X = 2

Here P is the probability of the ordered sequence, but we are interested only in the probability of each specific value of X. For X = 0 there are \binom{2}{1} = 2 ways of picking a heads, tails combination.

Enumerating the probabilities, as before, with k = 0, 1, 2 for X = -2, 0, 2 respectively, we have

\begin{aligned}P_2(r, X) = r^{2 - k} (1 - r)^{k} \binom{2}{k}\end{aligned} \hspace{\stretch{1}}(1.0.10)

3 tosses

Increasing our sample space by one more toss our possibilities for all ordered triplets of toss results is

  1. (h,h,h), P = r^3, X = -3
  2. (h,h,t), P = r^2(1 - r), X = -1
  3. (h,t,h), P = r^2(1 - r), X = -1
  4. (h,t,t), P = r(1 - r)^2, X = 1
  5. (t,h,h), P = r^2(1 - r), X = -1
  6. (t,h,t), P = r(1 - r)^2, X = 1
  7. (t,t,h), P = r(1 - r)^2, X = 1
  8. (t,t,t), P = (1 - r)^3, X = 3

Here P is the probability of the ordered sequence, but we are still interested only in the probability of each specific value of X. We see that we have
\binom{3}{1} = \binom{3}{2} = 3 ways of picking some ordering of either (h,h,t) or (t,t,h)

Now enumerating the possibilities with k = 0, 1, 2, 3 for X = -3, -1, 1, 3 respectively, we have

\begin{aligned}P_3(r, X) = r^{3 - k} (1 - r)^{k} \binom{3}{k}\end{aligned} \hspace{\stretch{1}}(1.0.11)

N tosses

To generalize we need a mapping between our random variable X, and the binomial index k, but we know what that is from the fair coin problem: one of (N-X)/2 or (N + X)/2. To get the signs right, let’s evaluate (N \pm X)/2 for N = 3 and X \in \{-3, -1, 1, 3\}

Mapping between k and (N \pm X)/2 for N = 3:

 X    (N-X)/2    (N+X)/2
-3       3          0
-1       2          1
 1       1          2
 3       0          3

Using this, we see that the generalization to unfair coins of the binomial distribution is

\begin{aligned}\boxed{P_N(r, X) = r^{\frac{N-X}{2}} (1 - r)^{\frac{N+X}{2}} \frac{N!}{\left(\frac{N + X}{2}\right)!\left(\frac{N - X}{2}\right)!}}\end{aligned} \hspace{\stretch{1}}(1.0.12)

Checking against the fair result, we see that we have the 1/2^N factor when r = 1/2 as expected. Let’s check for X = -1 (two heads, one tail) to see if the exponents are right. That is

\begin{aligned}P_3(r, -1) = r^{\frac{3 + 1}{2}} (1 - r)^{\frac{3 - 1}{2}} \frac{3!}{\left(\frac{3 - 1}{2}\right)!\left(\frac{3 + 1}{2}\right)!}=r^2 (1-r) \frac{3!}{1! 2!}= 3 r^2 (1 - r)\end{aligned} \hspace{\stretch{1}}(1.0.13)

Good, we’ve got a r^2 (two heads) term as desired.
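Equation 1.0.12 can also be checked by brute force enumeration of all 2^N toss sequences; this is a sketch of my own (r = 0.3 and N = 5 are arbitrary choices):

```python
from itertools import product
from math import comb

r, N = 0.3, 5
brute = {}
for seq in product((-1, 1), repeat=N):  # -1 = head, +1 = tail
    X = sum(seq)
    p = r ** seq.count(-1) * (1 - r) ** seq.count(1)
    brute[X] = brute.get(X, 0.0) + p

for X in sorted(brute):
    formula = r ** ((N - X) // 2) * (1 - r) ** ((N + X) // 2) * comb(N, (N - X) // 2)
    assert abs(brute[X] - formula) < 1e-12
    print(f"X = {X:+d}   P = {formula:.6f}")
```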

Limiting form

To determine the limiting behavior, we can utilize the Central limit theorem. We first have to calculate the mean and the variance for the N=1 case. The first two moments are

\begin{aligned}\left\langle{{X}}\right\rangle &= -1 r + 1 (1-r) \\ &= 1 - 2 r\end{aligned} \hspace{\stretch{1}}(1.0.14a)

\begin{aligned}\left\langle{{X^2}}\right\rangle &= (-1)^2 r + 1^2 (1-r) \\ &= 1\end{aligned} \hspace{\stretch{1}}(1.0.14b)

and the variance is

\begin{aligned}\left\langle{{X^2}}\right\rangle -\left\langle{{X}}\right\rangle^2 &= 1 - (1 - 2r)^2 \\ &= 1 - ( 1 - 4 r + 4 r^2 ) \\ &= 4 r - 4 r^2 \\ &= 4 r ( 1 - r )\end{aligned} \hspace{\stretch{1}}(1.0.15)

The Central Limit Theorem gives us

\begin{aligned}P_N(r, X) \rightarrow \frac{1}{{ \sqrt{8 \pi N r (1 - r) }}} \exp\left(- \frac{( X - N (1 - 2 r) )^2}{8 N r ( 1 - r )}\right),\end{aligned} \hspace{\stretch{1}}(1.0.16)

however, we saw in [1] that this theorem was derived for continuous random variables. Here we have random variables that only take on either odd or even integer values, with parity depending on whether N is odd or even. We’ll need to double the CLT result to account for this. This gives us

\begin{aligned}\boxed{P_N(r, X) \rightarrow \frac{1}{ \sqrt{2 \pi N r (1 - r) }} \exp\left(- \frac{( X - N (1 - 2 r) )^2}{8 N r ( 1 - r )}\right)}\end{aligned} \hspace{\stretch{1}}(1.0.17)

As a check we note that for r = 1/2 we have r(1-r) = 1/4 and 1 - 2r = 0, so we get

\begin{aligned}P_N(1/2, X) \rightarrow \frac{2}{ \sqrt{2 \pi N }} \exp\left(- \frac{ X^2}{2 N }\right).\end{aligned} \hspace{\stretch{1}}(1.0.18)

Observe that both this and 1.0.8 do not integrate to unity, but to 2. This is expected given the parity of the discrete random variable X. An integral normalization check is really only approximating the sum over integral values of our discrete random variable, and here we want to skip half of those values.
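A numerical look at 1.0.17 near the mean makes the doubling argument concrete. This is my own sketch, with r = 0.3 and N = 200 as arbitrary choices (so the mean is N(1 - 2r) = 80, and X must be even):

```python
from math import comb, exp, pi, sqrt

r, N = 0.3, 200
mu, var = 1 - 2 * r, 4 * r * (1 - r)  # per-toss mean and variance
for X in range(70, 91, 4):            # even X only, matching the parity of N
    exact = r ** ((N - X) // 2) * (1 - r) ** ((N + X) // 2) * comb(N, (N - X) // 2)
    clt = exp(-(X - N * mu) ** 2 / (2 * N * var)) / sqrt(2 * pi * N * r * (1 - r))
    print(f"X = {X:3d}   exact = {exact:.6f}   CLT = {clt:.6f}")
```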

References

[1] Peter Young. Proof of the central limit theorem in statistics, 2009. URL http://physics.ucsc.edu/~peter/116C/clt.pdf. [Online; accessed 13-Jan-2013].

Posted in Math and Physics Learning. | Leave a Comment »

PHY452H1S Basic Statistical Mechanics. Lecture 2: Probability. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on January 11, 2013

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Probability

The discrete case is plotted roughly in fig 1.

Fig1: Discrete probability distribution

\begin{aligned}P(x) \ge 0\end{aligned} \hspace{\stretch{1}}(1.0.1a)

\begin{aligned}\sum_x P(x) = 1\end{aligned} \hspace{\stretch{1}}(1.0.1b)

A continuous probability distribution may look like fig 2.

Fig2: Continuous probability distribution

\begin{aligned}\mathcal{P}(x) \ge 0\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}\int \mathcal{P}(x) dx = 1\end{aligned} \hspace{\stretch{1}}(1.0.2b)

The probability that an event lies in the interval [x_0, x_1], of width \Delta x = x_1 - x_0, is

\begin{aligned}\int_{x_0}^{x_1} \mathcal{P}(x) dx\end{aligned} \hspace{\stretch{1}}(1.0.3)

Central limit theorem

\begin{aligned}x \leftrightarrow P(x)\end{aligned} \hspace{\stretch{1}}(1.0.4)

Suppose we construct a sum of random variables

\begin{aligned}X = \sum_{i = 1}^N x_i\end{aligned} \hspace{\stretch{1}}(1.0.5)

Gambling, coin toss

\begin{aligned}x \rightarrow \left\{\begin{array}{l l}+1 & \quad \mbox{Heads} \\ -1 & \quad \mbox{Tails} \end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.0.6)

We ask what the total number of heads minus the total number of tails is (do we have an excess of heads, and by how much?).

Given an average of

\begin{aligned}\left\langle{{x}}\right\rangle = \mu\end{aligned} \hspace{\stretch{1}}(1.0.7)

and a variance (or squared standard deviation) of

\begin{aligned}\left\langle{{x^2}}\right\rangle - \left\langle{{x}}\right\rangle^2 = \sigma^2\end{aligned} \hspace{\stretch{1}}(1.0.8)

we have for the sum of random variables

\begin{aligned}\lim_{N \rightarrow \infty} P(X)= \frac{1}{{\sigma \sqrt{2 \pi N}}} \exp\left( - \frac{ (X - N \mu)^2}{2 N \sigma^2} \right)\end{aligned} \hspace{\stretch{1}}(1.0.9a)

\begin{aligned}\left\langle{{X}}\right\rangle = N \mu\end{aligned} \hspace{\stretch{1}}(1.0.9b)

\begin{aligned}\left\langle{{X^2}}\right\rangle - \left\langle{{X}}\right\rangle^2 = N \sigma^2\end{aligned} \hspace{\stretch{1}}(1.0.9c)

To be proven in the notes, not here.
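A one-line simulation makes 1.0.9b and 1.0.9c plausible for the coin toss case (my own sketch, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 1000, 100_000
# x = +1 for heads, -1 for tails, so mu = 0 and sigma^2 = 1 per toss.
# X = (#heads) - (#tails) = 2*(#heads) - N, with #heads ~ Binomial(N, 1/2).
heads = rng.binomial(N, 0.5, size=trials)
X = 2 * heads - N
print(f"<X>      = {X.mean():+.3f}   (expect N mu = 0)")
print(f"variance = {X.var():.1f}   (expect N sigma^2 = {N})")
```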

Coin toss

Given

\begin{aligned}P(\text{Heads}) = \frac{1}{{2}}\end{aligned} \hspace{\stretch{1}}(1.0.10a)

\begin{aligned}P(\text{Tails}) = \frac{1}{{2}}\end{aligned} \hspace{\stretch{1}}(1.0.10b)

Our probability distribution may look like fig 3.

Fig3: Discrete probability distribution for Heads and Tails coin tosses

Aside: continuous analogue

Note that the continuous analogue of a distribution like this is

\begin{aligned}\mathcal{P}(x) = \frac{1}{{2}} \delta(x - 1) + \frac{1}{{2}} \delta(x + 1)\end{aligned} \hspace{\stretch{1}}(1.0.11)

2 tosses:

\begin{aligned}(x_1, x_2) \in \{(1, 1), (1, -1), (-1, 1), (-1, -1)\}\end{aligned} \hspace{\stretch{1}}(1.0.12)

\begin{aligned}X \rightarrow 2, 0, 0, -2\end{aligned} \hspace{\stretch{1}}(1.0.13)

Fig4: 2 tosses distribution

3 tosses

\begin{aligned}(x_1, x_2, x_3) \in \{(1, 1, 1), \cdots (-1, -1, -1)\}\end{aligned} \hspace{\stretch{1}}(1.0.14)

  1. X = 3 : 1 way
  2. X = 1 : 3 ways
  3. X = -1 : 3 ways
  4. X = -3 : 1 way

Fig5: 3 tosses

N tosses

We want to find P_N(X). We have

\begin{aligned}\text{Heads} - \text{Tails} = X \end{aligned} \hspace{\stretch{1}}(1.0.15)

\begin{aligned}\text{Total tosses} = \text{Tails} + \text{Heads} = N \end{aligned} \hspace{\stretch{1}}(1.0.16)

So that

  1. Heads: \frac{N + X}{2}
  2. Tails: \frac{N - X}{2}

How many ways can we obtain a specific event, such as 2 heads and 1 tail? We can enumerate these: \{ (H, H, T), (H, T, H), (T, H, H)\}.

Weighting the number of such orderings by the probability (1/2)^N of any particular sequence, the probability of a given X is

\begin{aligned}P_N(X) = \left(\frac{1}{{2}}\right)^N \binom{N}{\frac{N-X}{2}} \quad\mbox{or}\quad\left(\frac{1}{{2}}\right)^N \binom{N}{\frac{N+X}{2}}\end{aligned} \hspace{\stretch{1}}(1.0.17)

we find the Binomial distribution

\begin{aligned}P_N(X) = \left\{\begin{array}{l l}\left(\frac{1}{{2}}\right)^N \frac{N!}{\left(\frac{N-X}{2}\right)!\left(\frac{N+X}{2}\right)!}& \quad \mbox{if X and N have same parity} \\ 0& \quad \mbox{otherwise} \end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.0.18)

We’ll use the Stirling formula 1.0.36 to find

\begin{aligned}\begin{aligned}P_N(X) &= \left(\frac{1}{{2}}\right)^N \frac{N!}{\left(\frac{N-X}{2}\right)!\left(\frac{N+X}{2}\right)!} \\ &\approx\left( \frac{1}{{2}} \right)^N \frac{ e^{-N} N^N \sqrt{ 2 \pi N} }{ e^{-\frac{N+X}{2}} \left( \frac{N+X}{2}\right)^{\frac{N+X}{2}} \sqrt{ 2 \pi \frac{N+X}{2}} e^{-\frac{N-X}{2}} \left( \frac{N-X}{2}\right)^{\frac{N-X}{2}} \sqrt{ 2 \pi \frac{N-X}{2}} } \\ &=\left( \frac{1}{{2}} \right)^N \frac{ 2 N^N \sqrt{ N} }{ \sqrt{2 \pi}\left( \frac{N+X}{2}\right)^{\frac{N+X}{2}} \sqrt{ N^2 - X^2} \left( \frac{N-X}{2}\right)^{\frac{N-X}{2}} } \\ &=\frac{ 2 N^N \sqrt{ N} }{ \sqrt{2 \pi}\left( N^2 - X^2 \right)^{N/2 + 1/2}\left( \frac{N+X}{N-X}\right)^{X/2} }\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.19)

This can apparently be simplified (taking logarithms and expanding, as in the problem set solution above) to

\begin{aligned}P_N(X) = \frac{2}{\sqrt{2 \pi N}} \exp\left( -\frac{X^2}{2N} \right)\end{aligned} \hspace{\stretch{1}}(1.0.20)


Stirling formula

To prove the Stirling formula we’ll use the Gamma function related integral

\begin{aligned}I(\alpha) = \int_0^\infty dy e^{-y} y^\alpha\end{aligned} \hspace{\stretch{1}}(1.0.21)

Observe that we have

\begin{aligned}I(0) = \int_0^\infty dy e^{-y} = 1,\end{aligned} \hspace{\stretch{1}}(1.0.22)

and

\begin{aligned}I(\alpha + 1) = \int_0^\infty dy e^{-y} y^{\alpha + 1}= \int_0^\infty d \left( \frac{e^{-y}}{-1} \right) y^{\alpha + 1} = -\int_0^\infty dy \left( \frac{e^{-y}}{-1} \right) (\alpha + 1) y^{\alpha}= (\alpha + 1)I(\alpha).\end{aligned} \hspace{\stretch{1}}(1.0.23)

This induction result means that

\begin{aligned}I(\alpha = N) = N!,\end{aligned} \hspace{\stretch{1}}(1.0.24)

so we can use the large \alpha behaviour of this function to find approximations of the factorial. What does the integrand look like? Plotting it for a few values of \alpha, as in fig 7, we find a hump for any non-zero value of \alpha (for \alpha = 0 the integrand is just a decaying exponential)

Fig7: Some values of the Stirling integrand
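Before making the peak approximation, the identity I(\alpha = N) = N! can be confirmed by direct numerical quadrature; this is a sketch of my own, using scipy:

```python
from math import factorial

import numpy as np
from scipy.integrate import quad

for N in (1, 2, 5, 10):
    I, _err = quad(lambda y: np.exp(-y) * y ** N, 0, np.inf)
    print(f"N = {N:2d}   I = {I:12.4f}   N! = {factorial(N)}")
```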

There’s a peak for large \alpha that can be approximated by a Gaussian function. When \alpha is large enough we can ignore the polynomial boundary effects. We want to look at where this integrand is peaked. We can write

\begin{aligned}I(\alpha) = \int_0^\infty dy e^{-y + \alpha \ln y} = \int_0^\infty dy f(y),\end{aligned} \hspace{\stretch{1}}(1.0.25)

and look for where f(y) is the largest. We’ve set

\begin{aligned}f(y) = -y + \alpha \ln y,\end{aligned} \hspace{\stretch{1}}(1.0.26)

and want to look at where

\begin{aligned}0 = {\left.{{f'(y)}}\right\vert}_{{y^{*}}}= -1 + \frac{\alpha}{y^{*}}\end{aligned} \hspace{\stretch{1}}(1.0.27)

so that the peak value

\begin{aligned}y^{*} = \alpha.\end{aligned} \hspace{\stretch{1}}(1.0.28)

We now want to expand the integrand around this peak value

\begin{aligned}I(\alpha) = \int_0^\infty dy\exp\left(f(y^{*}) + {\left.{{\frac{\partial {f}}{\partial {y}}}}\right\vert}_{{y^{*}}} (y - y^{*}) + \frac{1}{{2}}{\left.{{\frac{\partial^2 {{f}}}{\partial {{y}}^2}}}\right\vert}_{{y^{*}}}(y - y^{*})^2+ \cdots\right)\end{aligned} \hspace{\stretch{1}}(1.0.29)

We’ll drop all but the quadratic term, and first need the second derivative

\begin{aligned}f''(y) = \frac{d}{dy} \left(-1 + \alpha \frac{1}{{y}}\right)= -\alpha \frac{1}{{y^2}},\end{aligned} \hspace{\stretch{1}}(1.0.30)

at y = y^{*} = \alpha we have f''(y^{*}) = -\frac{1}{{\alpha}} and

\begin{aligned}I(\alpha \gg 1) \approx e^{f(y^{*})}\int_0^\infty dy\exp\left(\frac{1}{{2}}{\left.{{\frac{\partial^2 {{f}}}{\partial {{y}}^2}}}\right\vert}_{{y^{*}}}(y - y^{*})^2\right)=e^{f(\alpha)}\int_0^\infty dy e^{-\frac{(y - \alpha)^2}{2 \alpha}}\end{aligned} \hspace{\stretch{1}}(1.0.31)

For the integral Mathematica gives

\begin{aligned}\int_0^\infty dy e^{-\frac{(y - \alpha)^2}{2 \alpha}}=\sqrt{\frac{\pi \alpha }{2}} \left(\text{erf} \left(\sqrt{\frac{\alpha }{2}}\right)+1\right).\end{aligned} \hspace{\stretch{1}}(1.0.32)

From fig 8, observe that \text{erf}(x) \rightarrow 1 in the limit x \rightarrow \infty.

So we have for large \alpha

\begin{aligned}\int_0^\infty dy e^{-\frac{(y - \alpha)^2}{2 \alpha}}\approx \sqrt{2 \pi \alpha}\end{aligned} \hspace{\stretch{1}}(1.0.33)

\begin{aligned}e^{f(\alpha)} = e^{-\alpha + \alpha \ln \alpha}\end{aligned} \hspace{\stretch{1}}(1.0.34)

We have for \alpha \gg 1

\begin{aligned}I(\alpha) \approx e^{-\alpha} e^{\alpha \ln \alpha} \sqrt{2 \pi \alpha}\end{aligned} \hspace{\stretch{1}}(1.0.35)

This gives us the Stirling approximation

\begin{aligned}\boxed{N! \approx \sqrt{ 2 \pi N} N^N e^{-N}.}\end{aligned} \hspace{\stretch{1}}(1.0.36)
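A quick look at how good this approximation is, working with logarithms to avoid overflow (a sketch of my own, using the fact that lgamma(N + 1) = \ln N!):

```python
from math import lgamma, log, pi

for N in (5, 10, 50, 100):
    ln_exact = lgamma(N + 1)  # ln N!
    ln_stirling = 0.5 * log(2 * pi * N) + N * log(N) - N
    print(f"N = {N:3d}   ln N! = {ln_exact:9.4f}   Stirling = {ln_stirling:9.4f}")
```

The absolute error in \ln N! falls off like 1/(12 N), already below 0.002 by N = 50.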

Posted in Math and Physics Learning. | Leave a Comment »