Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Archive for May, 2013

Bernoulli polynomials and numbers and Euler-Maclaurin summation

Posted by peeterjoot on May 29, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Motivation

In [1] I saw the Euler summation formula casually used in a few places, allowing an approximation of a sum with derivatives at the origin. This rather powerful relationship was used in passing, and seemed like it was worth some exploration.

Bernoulli polynomials and numbers

Before tackling Euler summation, we first need to understand some properties of Bernoulli polynomials [3] and Bernoulli numbers [2]. The properties of interest required for the derivation of the Euler summation formula follow fairly easily from the following definitions of the Bernoulli polynomials B_m(z) and Bernoulli numbers B_k

\begin{aligned}B_m(z) = \sum_{k = 0}^m \binom{m}{k} B_k z^{m - k}\end{aligned} \hspace{\stretch{1}}(1.0.1.1)

\begin{aligned}0 = \sum_{k = 0}^{m-1} \binom{m}{k} B_k \frac{1}{{m!}}, \qquad m > 1.\end{aligned} \hspace{\stretch{1}}(1.0.1.2)

It is conventional to fix B_0 = 1. Eq. 1.0.1.2 then provides an iterative method to calculate all the higher Bernoulli numbers. Without calculating the Bernoulli numbers explicitly, we can relate these to the values of the polynomials at the origin

\begin{aligned}\boxed{B_m(0) = B_m.}\end{aligned} \hspace{\stretch{1}}(1.0.2)

Now, let’s calculate the first few of these, to verify that we’ve got the conventions right. Starting with m = 2 we have

\begin{aligned}0 = \sum_{k = 0}^{1} \binom{2}{k} B_k \frac{1}{{2!}}= \frac{1}{{2!}}\left( B_0 + 2 B_1  \right),\end{aligned} \hspace{\stretch{1}}(1.0.2)

or B_1 = -1/2. Next with m = 3

\begin{aligned}0 &= \sum_{k = 0}^{2} \binom{3}{k} B_k \frac{1}{{3!}} \\ &= \frac{B_0}{6} + \frac{B_1}{2} + \frac{B_2}{2} \\ &= \frac{1}{{2}} \left( \frac{1}{{3}} -\frac{1}{{2}} + B_2  \right)\end{aligned} \hspace{\stretch{1}}(1.0.2)

or B_2 = 1/6. Thus the first few Bernoulli polynomials are

\begin{aligned}\begin{aligned}B_0(z) &= 1 \\ B_1(z) &= z - \frac{1}{{2}} \\ B_2(z) &= z^2 - z + \frac{1}{{6}}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.5a)
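These are easy to generate mechanically. Here’s a little Python sketch (the function names are my own invention) that builds the numbers from eq. 1.0.1.2 and the polynomial coefficients from eq. 1.0.1.1, using exact rational arithmetic:

```python
# Sketch: Bernoulli numbers from the recurrence of eq. 1.0.1.2, and
# polynomial coefficients from eq. 1.0.1.1, in exact rationals.
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Return [B_0, ..., B_n], using 0 = sum_{k=0}^{m-1} C(m,k) B_k for m > 1."""
    B = [Fraction(1)]                              # B_0 = 1 by convention
    for m in range(2, n + 2):                      # the constraint at order m fixes B_{m-1}
        B.append(-sum(comb(m, k) * B[k] for k in range(m - 1)) / m)
    return B

def bernoulli_poly_coeffs(m, B):
    """Coefficients of B_m(z) = sum_k C(m,k) B_k z^{m-k}, highest power first."""
    return [comb(m, k) * B[k] for k in range(m + 1)]

B = bernoulli_numbers(6)
print([str(b) for b in B[:5]])                        # ['1', '-1/2', '1/6', '0', '-1/30']
print([str(c) for c in bernoulli_poly_coeffs(2, B)])  # ['1', '-1', '1/6'], i.e. z^2 - z + 1/6
```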

The Bernoulli polynomials have a simple relation to their derivatives. Differentiating directly, we have

\begin{aligned}B_m'(z) &= \sum_{k = 0}^{m-1} (m - k)\binom{m}{k} B_k z^{m - k -1} \\ &= \sum_{k = 0}^{m-1} \frac{m!}{(m - k - 1)! k!} B_k z^{m - k -1} \\ &= m\sum_{k = 0}^{m-1} \frac{(m - 1)!}{(m - 1 - k)! k!} B_k z^{m - 1 - k},\end{aligned} \hspace{\stretch{1}}(1.0.5a)

or

\begin{aligned}\boxed{B_m'(z) = m B_{m-1}(z) }\end{aligned} \hspace{\stretch{1}}(1.0.7)

There are a number of difference relations that the polynomials satisfy. The one that we need is

\begin{aligned}\boxed{B_m(z + 1) - B_m(z) = m z^{m -1}.}\end{aligned} \hspace{\stretch{1}}(1.0.8)

To prepare for demonstrating this difference in general, let’s perform this calculation for the specific cases of m = 1 and m = 3 to remove some of the index abstraction from the mix. For m = 1 we have

\begin{aligned}B_1(z + 1) - B_1(z) &= \sum_{k = 0}^1 \binom{1}{k} B_k \left(\left( z + 1 \right)^{1 - k}- z^{1 - k}\right) \\ &= B_0\left(\left( z + 1 \right)^1- z^1\right)+ 1B_1\left(\left( z + 1 \right)^0- z^0\right) \\ &= B_0 \\ &= 1.\end{aligned} \hspace{\stretch{1}}(1.0.8)

For m = 3 (a value of m > 1 that is representative) we have

\begin{aligned}B_3(z + 1) - B_3(z) &= \sum_{k = 0}^3 \binom{3}{k} B_k \left(\left( z + 1 \right)^{3 - k}- z^{3 - k}\right) \\ &= B_0\left(\left( z + 1 \right)^3- z^3\right)+ 3B_1\left(\left( z + 1 \right)^2- z^2\right)+ 3B_2\left(\left( z + 1 \right)^1- z^1\right)+ B_3\not{{\left(\left( z + 1 \right)^0- z^0\right)}} \\ &= B_0\left(3 z^2 + 3 z + 1\right)+ 3B_1(2 z + 1)+ 3B_2 \\ &= 3 z^2 + z^1 \left( 3 - 3  \right)+ z^0 \left( 1 - \frac{3}{2} + \frac{3}{6}  \right) \\ &= 3 z^2.\end{aligned} \hspace{\stretch{1}}(1.0.8)

Evaluating this in general, we see that the term with the highest order Bernoulli number is immediately killed, and we’ll have just one highest order monomial out of the mix. We expect all the remaining monomial terms to be killed term by term. That general difference, for m \ge 2, is

\begin{aligned}B_m(z + 1) - B_m(z) &= \sum_{k = 0}^{m - 1}\binom{m}{k} B_k \left(\left( z + 1\right)^{m - k}- z^{m - k}\right) \\ &= \sum_{k = 0}^{m - 1}\binom{m}{k} B_k \sum_{s = 0}^{m - k - 1} \binom{m - k}{s} z^s= m! \sum_{s = 0}^{m - 1}\frac{z^s}{s!}\sum_{k = 0}^{m - s - 1} \frac{1}{\not{{(m -k)!}} k!} \frac{\not{{(m - k)!}}}{(m - k - s)!} B_k \\ &= \frac{m! }{(m -1)!} z^{m - 1}\sum_{k = 0}^{m - m + 1 - 1} \frac{1}{ k! (m - k - m + 1)!} B_k +m! \sum_{s = 0}^{m - 2}\frac{z^s}{s!}\sum_{k = 0}^{m - s - 1} \frac{1}{ k! (m - k - s)!} B_k \\ &= m z^{m - 1}+ m! \sum_{s = 0}^{m - 2}\frac{z^s}{s!}\left( \sum_{k = 0}^{(m-s) - 1} \binom{m - s}{k} B_k \frac{1}{{(m - s)!}}  \right).\end{aligned} \hspace{\stretch{1}}(1.0.8)

This last sum, up to m - s - 1, has the form of eq. 1.0.1.2, so it is killed off. This proves eq. 1.0.8 as desired.
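Both eq. 1.0.7 and eq. 1.0.8 are also easy to spot check symbolically. A small sketch using sympy, whose bernoulli(m, z) polynomials follow the same B_1(z) = z - 1/2 convention as above:

```python
# Sketch: spot check eq. 1.0.7 and eq. 1.0.8 symbolically with sympy.
from sympy import symbols, bernoulli, diff, expand

z = symbols('z')
for m in range(1, 7):
    Bm = bernoulli(m, z)                      # sympy's B_m(z)
    # derivative relation, eq. 1.0.7: B_m'(z) = m B_{m-1}(z)
    assert expand(diff(Bm, z) - m * bernoulli(m - 1, z)) == 0
    # difference relation, eq. 1.0.8: B_m(z + 1) - B_m(z) = m z^{m-1}
    assert expand(Bm.subs(z, z + 1) - Bm - m * z**(m - 1)) == 0
print("eq. 1.0.7 and eq. 1.0.8 hold for m = 1, ..., 6")
```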

Evaluating the definition eq. 1.0.1.1 at z = 1 and applying the constraint, we find for m > 1

\begin{aligned}B_m(1) &= \sum_{k = 0}^m \binom{m}{k} B_k \\ &= m! \sum_{k = 0}^{m-1} \binom{m}{k} B_k \frac{1}{{m!}}+B_m \\ &= B_m,\end{aligned} \hspace{\stretch{1}}(1.0.8)

and for m = 1

\begin{aligned}B_1(1) = 1 + B_1(0) = 1 - 1/2 = -B_1.\end{aligned} \hspace{\stretch{1}}(1.0.8)

We find that either of the end points of the [0, 1] interval provides us (up to a sign) with the Bernoulli numbers

\begin{aligned}\boxed{B_m(1) = \left\{\begin{array}{l l}B_m & \quad m > 1 \\ -B_1 & \quad m = 1 \end{array}\right.}\end{aligned} \hspace{\stretch{1}}(1.0.14)

Integrating eq. 1.0.7 after an m \rightarrow m + 1 substitution, and comparing to the difference equation, we have

\begin{aligned}(m + 1) z^m &= B_{m + 1}(z + 1) - B_{m + 1}(z) \\ &= (m + 1)\int_z^{z+1} B_m(u) du,\end{aligned} \hspace{\stretch{1}}(1.0.15)

or

\begin{aligned}\boxed{\int_z^{z+1} B_m(u) du = z^m.}\end{aligned} \hspace{\stretch{1}}(1.0.16)

Evaluating this at z = 0 shows that, for m \ge 1, our polynomials have no net area in the [0, 1] interval, or

\begin{aligned}\boxed{\int_0^{1} B_m(z) dz = 0, \qquad m \ge 1.}\end{aligned} \hspace{\stretch{1}}(1.0.17)

We also obtain Bernoulli’s sum of powers result

\begin{aligned}\int_0^n B_m(z) dz &= \int_0^1 B_m(z) dz+\int_1^2 B_m(z) dz+\cdots+\int_{n-1}^{n} B_m(z) dz \\ &= 0 + 1^m + 2^m + \cdots + (n-1)^m,\end{aligned} \hspace{\stretch{1}}(1.0.18)

or

\begin{aligned}\boxed{\sum_{k = 1}^{n-1} k^m = \int_1^n B_m(z) dz.}\end{aligned} \hspace{\stretch{1}}(1.0.19)

We don’t need this result for the Euler summation formula, but it’s cool!
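Cool enough to verify mechanically, in fact. A small sketch, again leaning on sympy’s bernoulli polynomials:

```python
# Sketch: verify Bernoulli's sum of powers, eq. 1.0.19, for small m and n.
from sympy import symbols, bernoulli, integrate

z = symbols('z')
for m in range(1, 5):
    for n in range(2, 7):
        lhs = sum(k**m for k in range(1, n))           # 1^m + ... + (n-1)^m
        rhs = integrate(bernoulli(m, z), (z, 1, n))    # int_1^n B_m(z) dz
        assert lhs == rhs
print("sum of powers identity verified for m < 5, n < 7")
```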

To arrive at some of these results I’ve followed, in part, portions of the approach outlined in [5]. That treatment, however, starts by deriving some difference calculus results, and uses associated generating functions for a more abstract difference equation related to the Bernoulli polynomials. In this summary of relationships above, I’ve attempted to avoid any requirement to first study the difference equation formalism (although that is also cool, and not actually that difficult).

Euler-Maclaurin summation

Following Wikipedia [4], we utilize the simple boundary conditions for the Bernoulli polynomials in the [0, 1] interval. We can exploit these using integration by parts if we do a periodic extension of these polynomials in that interval.

Writing \lfloor {x} \rfloor for the largest integer less than or equal to x, our periodic extension of the [0, 1] interval Bernoulli polynomial is

\begin{aligned}P_m(x) = B_m\left( x - \lfloor {x} \rfloor  \right).\end{aligned} \hspace{\stretch{1}}(1.0.20)

From eq. 1.0.2 and eq. 1.0.14, the values of this extension approaching each integer point from the left are

\begin{aligned}P_m(1) = \left\{\begin{array}{l l}B_m(0) = B_m & \quad m > 1 \\ -B_1(0) = -B_1 & \quad m = 1 \end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.0.21)

Utilizing eq. 1.0.7 we can integrate by parts in a specific unit interval

\begin{aligned}\int_k^{k+1} f(x) dx &= \int_k^{k+1} f(x) P_0(x) dx \\ &= \int_k^{k+1} f(x) d \left( \frac{P_1(x)}{1}  \right) \\ &= {\left.{{\left( f(x) P_1(x)  \right)}}\right\vert}_{{k}}^{{k+1}}-\int_k^{k+1} f'(x) P_1(x) dx \\ &= - B_1 f(k+1) - B_1 f(k)-\int_k^{k+1} f'(x) P_1(x) dx \\ &= \frac{1}{{2}} \left( f(k+1) + f(k)  \right)-\int_k^{k+1} f'(x) P_1(x) dx.\end{aligned} \hspace{\stretch{1}}(1.0.21)

Summing gives us

\begin{aligned}\int_0^{n} f(x) dx &= \sum_{k = 0}^{n-1}\int_k^{k+1} f(x) dx \\ &= \frac{1}{{2}} f(0) + \sum_{k = 1}^{n-1} f(k) + \frac{1}{{2}} f(n)-\int_0^{n} f'(x) P_1(x) dx,\end{aligned} \hspace{\stretch{1}}(1.0.21)

or

\begin{aligned}\sum_{k = 0}^{n} f(k)=\int_0^{n} f(x) dx+\frac{1}{{2}} \left( f(0) + f(n)  \right)+\int_0^{n} f'(x) P_1(x) dx.\end{aligned} \hspace{\stretch{1}}(1.0.26)

Continuing the integration by parts we have

\begin{aligned}\int_0^{n} f'(x) P_1(x) dx &= \sum_{k = 0}^{n-1}\int_k^{k+1} f'(x) P_1(x) dx \\ &= \sum_{k = 0}^{n-1}\int_k^{k+1} f'(x) d \left( \frac{P_2(x)}{2}  \right) \\ &= \sum_{k = 0}^{n-1}\frac{B_2}{2} \left( f'(k+1) - f'(k)  \right)-\sum_{k = 0}^{n-1}\int_k^{k+1} f''(x) \frac{P_2(x)}{2} dx \\ &= \frac{B_2}{2} \left( f'(n) - f'(0)  \right)-\int_0^{n} f''(x) \frac{P_2(x)}{2} dx \\ &= \frac{B_2}{2} \left( f'(n) - f'(0)  \right)-\frac{B_3}{3!} \left( f''(n) - f''(0)  \right)+\int_0^{n} f'''(x) \frac{P_3(x)}{3!} dx \\ &= \sum_{s = 1}^{m-1}(-1)^{s-1}\frac{B_{s+1}}{(s+1)!} \left( f^{(s)}(n) - f^{(s)}(0)  \right)+(-1)^{m-1}\int_0^{n} f^{(m)}(x) \frac{P_m(x)}{m!} dx,\end{aligned} \hspace{\stretch{1}}(1.0.21)

or

\begin{aligned}\boxed{\begin{aligned}\sum_{k = 0}^{n} f(k)&=\int_0^{n} f(x) dx+\frac{1}{{2}} \left( f(0) + f(n)  \right)\\ &+\sum_{s = 1}^{m-1}(-1)^{s-1}\frac{B_{s+1}}{(s+1)!} \left( f^{(s)}(n) - f^{(s)}(0)  \right)+(-1)^{m-1}\int_0^{n} f^{(m)}(x) \frac{P_m(x)}{m!} dx.\end{aligned}}\end{aligned} \hspace{\stretch{1}}(1.0.27)
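To see the formula in action, here’s a rough numerical sketch of my own construction: approximate \sum_{k=0}^n e^k by simply dropping the remainder integral, using the fact that every derivative of e^x is e^x:

```python
# Sketch: Euler-Maclaurin approximation of sum_{k=0}^{n} e^k, dropping the
# remainder integral.  Since d^s/dx^s e^x = e^x, every boundary term is a
# multiple of (e^n - 1).  Bernoulli numbers hardcoded from the table above.
from math import exp, factorial

B = [1, -0.5, 1/6, 0, -1/30, 0, 1/42, 0]               # B_0, ..., B_7

def em_sum_exp(n, m):
    total = (exp(n) - 1) + 0.5 * (1 + exp(n))          # integral plus end point average
    for s in range(1, m):                              # boundary corrections, s = 1 .. m-1
        total += (-1) ** (s - 1) * B[s + 1] / factorial(s + 1) * (exp(n) - 1)
    return total

n = 5
exact = sum(exp(k) for k in range(n + 1))
for m in (1, 3, 5, 7):
    print(m, abs(em_sum_exp(n, m) - exact))            # error shrinks as m grows
```

For e^x the correction terms shrink geometrically, so keeping more terms keeps helping here; for general f the Euler-Maclaurin series is only asymptotic, and the remainder integral has to be watched.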

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

[2] Wikipedia. Bernoulli number — Wikipedia, the free encyclopedia, 2013. URL http://en.wikipedia.org/w/index.php?title=Bernoulli_number&oldid=556109551. [Online; accessed 28-May-2013].

[3] Wikipedia. Bernoulli polynomials — Wikipedia, the free encyclopedia, 2013. URL http://en.wikipedia.org/w/index.php?title=Bernoulli_polynomials&oldid=548729909. [Online; accessed 28-May-2013].

[4] Wikipedia. Euler-Maclaurin formula — Wikipedia, the free encyclopedia, 2013. URL http://en.wikipedia.org/w/index.php?title=Euler%E2%80%93Maclaurin_formula&oldid=552061467. [Online; accessed 28-May-2013].

[5] Heinrich Behnke, Helmuth Gerike, and Sydney Henry Gould. Fundamentals of mathematics, volume 3. MIT Press, 1974.

Posted in Math and Physics Learning.

Public service announcement: how to disable irritating flashing modal Lotus Sametime chat windows.

Posted by peeterjoot on May 16, 2013

Lotus Notes/Sametime has a spectacularly annoying default for its chat application that makes chat sessions modal. Not only that, but the windows are both modal and flashing until you click on them.

Somebody on Facebook told me how to disable this brain dead “feature”, and I’m sharing it here. You need to use File -> Preferences -> Sametime -> Notifications, but once you are there what shows up is “Location awareness”:

[Screenshot: the Sametime notifications preferences, showing the “Location awareness” settings]

You have to individually click on all the other options (like One-on-one) to actually disable the modal and flashing nastiness. For example:

[Screenshot: the One-on-one notification settings, with the modal and flashing options disabled]

Once this is done, Sametime windows hide in the background where they should be, until you actually get around to looking at them, if you ever choose to.

Posted in Development environment

Bose gas specific heat above condensation temperature

Posted by peeterjoot on May 9, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Question: Bose gas specific heat above condensation temperature ([1] section 7.1.37)

Equation 7.1.33 provides a relation for specific heat

\begin{aligned}\frac{C_{\mathrm{V}}}{N k_{\mathrm{B}}} = \left(\frac{\partial {}}{\partial {T}}\left( \frac{3}{2} T \frac{ g_{5/2}(z) } { g_{3/2}(z) }  \right)\right)_v.\end{aligned} \hspace{\stretch{1}}(1.0.1)

Fill in the details showing how this can be used to find

\begin{aligned}\frac{C_{\mathrm{V}}}{N k_{\mathrm{B}}} = \frac{15}{4} \frac{ g_{5/2}(z) }{ g_{3/2}(z) }-\frac{9}{4} \frac{ g_{3/2}(z) }{ g_{1/2}(z) }.\end{aligned} \hspace{\stretch{1}}(1.0.2)

Answer

With

\begin{aligned}g_{{3/2}}(z) = \frac{\lambda^3}{v} = \frac{h^3}{v \left( 2 \pi m k_{\mathrm{B}} T \right)^{3/2}},\end{aligned} \hspace{\stretch{1}}(1.0.3)

we have for constant v

\begin{aligned}\left({\partial {g_{3/2}}}/{\partial {T}}\right)_{{v}}= -\frac{3}{2}\frac{h^3}{v \left( 2 \pi m k_{\mathrm{B}} \right)^{3/2} T^{5/2}}= -\frac{3}{2 T} g_{{3/2}}(z).\end{aligned} \hspace{\stretch{1}}(1.0.3)

From the series expansion

\begin{aligned}g_{{\nu}}(z) = \sum_{k = 1}^\infty \frac{z^k}{k^\nu},\end{aligned} \hspace{\stretch{1}}(1.0.5)

we have

\begin{aligned}z \frac{\partial {}}{\partial {z}} g_{{\nu}}(z) = z\sum_{k = 1}^\infty k \frac{z^{k-1}}{k^\nu}=\sum_{k = 1}^\infty \frac{z^{k}}{k^{\nu-1}}= g_{{\nu-1}}(z).\end{aligned} \hspace{\stretch{1}}(1.0.5)
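This recurrence is simple to sanity check numerically, differentiating a truncated series (the cutoff and step size below are arbitrary choices):

```python
# Sketch: check z d/dz g_nu(z) = g_{nu-1}(z) with a truncated series and a
# central difference, at an arbitrary sample point.
def g(nu, z, terms=200):
    return sum(z**k / k**nu for k in range(1, terms + 1))

z, h = 0.5, 1e-6
lhs = z * (g(2.5, z + h) - g(2.5, z - h)) / (2 * h)    # z dg_{5/2}/dz at z = 0.5
print(lhs, g(1.5, z))                                  # both approximate g_{3/2}(0.5)
```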

Taken together we have

\begin{aligned}-\frac{3}{2 T} g_{{3/2}}(z) &=\left({\partial {g_{3/2}}}/{\partial {T}}\right)_{{v}} \\ &=\left({\partial {z}}/{\partial {T}}\right)_{{v}}\frac{\partial {}}{\partial {z}} g_{{3/2}}(z) \\ &=\frac{1}{{z}} \left({\partial {z}}/{\partial {T}}\right)_{{v}}z \frac{\partial {}}{\partial {z}} g_{{3/2}}(z) \\ &=\frac{1}{{z}} \left({\partial {z}}/{\partial {T}}\right)_{{v}}g_{{1/2}}(z),\end{aligned} \hspace{\stretch{1}}(1.0.5)

or

\begin{aligned}\frac{1}{{z}} \left({\partial {z}}/{\partial {T}}\right)_{{v}} = -\frac{3}{2 T} \frac{g_{{3/2}}(z)}{g_{{1/2}}(z)}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

We are now ready to evaluate the derivative and find the specific heat

\begin{aligned}\frac{C_{\mathrm{V}}}{N k_{\mathrm{B}}} &= \left(\frac{\partial {}}{\partial {T}}\left( \frac{3}{2} T \frac{ g_{5/2}(z) } { g_{3/2}(z) }  \right)\right)_v \\ &=\frac{3}{2}  \frac{ g_{5/2}(z) }{ g_{3/2}(z) }+\frac{3 T}{2} \left({\partial {z}}/{\partial {T}}\right)_{{v}}\frac{\partial {}}{\partial {z}}\left( \frac{ g_{5/2}(z) } { g_{3/2}(z) }  \right) \\ &=\frac{3}{2}  \frac{ g_{5/2}(z) }{ g_{3/2}(z) }-\frac{9}{4} \frac{g_{{3/2}}(z)}{g_{{1/2}}(z)}z\frac{\partial {}}{\partial {z}}\left( \frac{ g_{5/2}(z) } { g_{3/2}(z) }  \right) \\ &=\frac{3}{2}  \frac{ g_{5/2}(z) }{ g_{3/2}(z) }-\frac{9 }{4} \frac{g_{{3/2}}(z)}{g_{{1/2}}(z)}\not{{\frac{ g_{3/2}(z) }{ g_{3/2}(z) }}}+\frac{9 }{4} \frac{\not{{g_{{3/2}}(z)}}}{\not{{g_{{1/2}}(z)}}}\frac{ g_{5/2}(z) \not{{g_{1/2}(z)}}}{ \left( g_{3/2}(z) \right)^{\not{{2}}} } \\ &=\frac{3}{2}  \frac{ g_{5/2}(z) }{ g_{3/2}(z) }-\frac{9 }{4} \frac{g_{{3/2}}(z)}{g_{{1/2}}(z)}+\frac{9 }{4} \frac{ g_{5/2}(z) }{ g_{3/2}(z) } \\ &=\frac{15}{4}  \frac{ g_{5/2}(z) }{ g_{3/2}(z) }-\frac{9 }{4} \frac{g_{{3/2}}(z)}{g_{{1/2}}(z)}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

This is the desired result.
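As a numerical sanity check of eq. 1.0.2, we can hold v fixed, so that g_{3/2}(z(T)) \sim T^{-3/2}, solve for z(T) by bisection, and compare a finite difference of (3/2) T g_{5/2}/g_{3/2} against the closed form. A sketch (the sample point and tolerances are arbitrary choices):

```python
# Sketch: numerical check of eq. 1.0.2.  Holding v fixed means
# g_{3/2}(z(T)) = c T^{-3/2}; solve for z(T) by bisection, then compare a
# finite difference of (3/2) T g_{5/2}/g_{3/2} with the closed form.
def g(nu, z, terms=400):
    return sum(z**k / k**nu for k in range(1, terms + 1))

def z_of_T(T, c):
    target, lo, hi = c * T**-1.5, 0.0, 1.0 - 1e-12     # g_{3/2} is increasing on (0, 1)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(1.5, mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

T0, z0, h = 1.0, 0.5, 1e-5                             # arbitrary sample point
c = g(1.5, z0) * T0**1.5                               # fixes the constant volume

def F(T):
    z = z_of_T(T, c)
    return 1.5 * T * g(2.5, z) / g(1.5, z)

numeric = (F(T0 + h) - F(T0 - h)) / (2 * h)
closed = 15 / 4 * g(2.5, z0) / g(1.5, z0) - 9 / 4 * g(1.5, z0) / g(0.5, z0)
print(numeric, closed)                                 # should agree to several digits
```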

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

Posted in Math and Physics Learning.

A dumb expansion of the Fermi-Dirac grand partition function

Posted by peeterjoot on May 9, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

In section 6.2 of [1] we have the following notation for the sums in the grand partition function \Omega. Note that I’ve switched notation from Z_G as used in class to \Omega as used on our final exam. The text uses a script Q like \mathcal{Q}, but with the loop much more disconnected and hard to interpret.

\begin{aligned}\Omega = \sum_{N = 0}^\infty z^N Q_N(V, T)\end{aligned} \hspace{\stretch{1}}(1.0.1a)

\begin{aligned}Q_N(V, T) = {\sum_{\{n_\epsilon\}}}' e^{-\beta \sum_\epsilon n_\epsilon \epsilon}.\end{aligned} \hspace{\stretch{1}}(1.0.1b)

This was shorthand notation for the canonical ensemble, subject to constraints on N and E

\begin{aligned}Q_N(V, T) = \sum_E e^{-\beta E}\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}E = \sum_\epsilon n_\epsilon \epsilon\end{aligned} \hspace{\stretch{1}}(1.0.2b)

\begin{aligned}N = \sum_\epsilon n_\epsilon.\end{aligned} \hspace{\stretch{1}}(1.0.2c)

I found this notation pretty confusing, since the normal conventions about what is a dummy index in the various summations do not hold.

The claim of the text (and in class) is that we can write out the grand canonical partition function as

\begin{aligned}\Omega = \left(\sum_{n_0}\left( z e^{-\beta \epsilon_0}  \right)^{n_0} \right)\left(\sum_{n_1}\left( z e^{-\beta \epsilon_1}  \right)^{n_1} \right)\cdots\end{aligned} \hspace{\stretch{1}}(1.0.5)

Let’s verify this for a Fermi-Dirac distribution by dispensing with the notational tricks and writing out the original specification of the grand canonical partition function in long form, and compare that to the first few terms of the expansion of eq. 1.0.5.

Let’s consider a specific value of E, namely all those values of E that apply to N = 3. Note that we have n_\epsilon \in \{0, 1\} only for a Fermi-Dirac system, so this means we can have values of E like

\begin{aligned}E \in \{ \epsilon_0 + \epsilon_1 + \epsilon_2, \epsilon_0 + \epsilon_3 + \epsilon_7, \epsilon_2 + \epsilon_6 + \epsilon_{11}, \cdots\}\end{aligned} \hspace{\stretch{1}}(1.0.4)

Our grand canonical partition function, when written out explicitly, will have the form

\begin{aligned}\Omega = z^0 e^{-0}+ z^1 \sum_{k} e^{-\beta \epsilon_k}+ z^2 \sum_{k < m} e^{-\beta (\epsilon_k + \epsilon_m) }+ z^3 \sum_{r < s < t} e^{-\beta (\epsilon_r + \epsilon_s + \epsilon_t) }+ \cdots\end{aligned} \hspace{\stretch{1}}(1.0.5)

Okay, that’s simple enough and really what the primed notation is getting at. Now let’s verify that after simplification this matches up with eq. 1.0.5. Expanding this out a bit we have

\begin{aligned}\Omega &= \left(\sum_{n_0 = 0}^1\left( z e^{-\beta \epsilon_0}  \right)^{n_0} \right)\left(\sum_{n_1 = 0}^1\left( z e^{-\beta \epsilon_1}  \right)^{n_1} \right)\cdots \\ &= \left(1 + z e^{-\beta \epsilon_0} \right)\left(1 + z e^{-\beta \epsilon_1} \right)\left(1 + z e^{-\beta \epsilon_2} \right)\cdots \\ &= \left(1 + z e^{-\beta \epsilon_0} +z e^{-\beta \epsilon_1} +z^2 e^{-\beta (\epsilon_0 + \epsilon_1)} \right)\left(1 + z e^{-\beta \epsilon_2} +z e^{-\beta \epsilon_3} +z^2 e^{-\beta (\epsilon_2 + \epsilon_3)} \right)\left(1 + z e^{-\beta \epsilon_4} \right)\cdots \\ &= \Bigl(1 + z \left(e^{-\beta \epsilon_0} +e^{-\beta \epsilon_1} +e^{-\beta \epsilon_2} +e^{-\beta \epsilon_3} \right) \\ &+\qquad z^2 \left(e^{-\beta (\epsilon_0 + \epsilon_1)} +e^{-\beta (\epsilon_0 + \epsilon_2)} +e^{-\beta (\epsilon_0 + \epsilon_3)} +e^{-\beta (\epsilon_1 + \epsilon_2)} +e^{-\beta (\epsilon_1 + \epsilon_3)} +e^{-\beta (\epsilon_2 + \epsilon_3)} \right) \\ &+ \qquad z^3\left(e^{-\beta (\epsilon_0 + \epsilon_1 + \epsilon_2)} + e^{-\beta (\epsilon_0 + \epsilon_1 + \epsilon_3)} + e^{-\beta (\epsilon_0 + \epsilon_2 + \epsilon_3)} + e^{-\beta (\epsilon_1 + \epsilon_2 + \epsilon_3)} \right) \\ &+ \qquad z^4 e^{-\beta (\epsilon_0 + \epsilon_1 + \epsilon_2 + \epsilon_3)}\Bigr)\left(1 + z e^{-\beta \epsilon_4} \right)\cdots\end{aligned} \hspace{\stretch{1}}(1.0.5)

This completes the verification of the result as expected. It is definitely a brute force way of doing so, but it is easy to understand, and I found for myself that it removed some of the notation that obfuscated what is really a simple statement.

Once we are comfortable with this Fermi-Dirac expression of the grand canonical partition function, we can then write it in the product form that leads to the sum that we want after taking logs

\begin{aligned}\Omega &= \left(1 + z e^{-\beta \epsilon_0} \right)\left(1 + z e^{-\beta \epsilon_1} \right)\left(1 + z e^{-\beta \epsilon_2} \right)\cdots \\ &=\prod_\epsilon\left(1 + z e^{-\beta \epsilon} \right).\end{aligned} \hspace{\stretch{1}}(1.0.7)
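This equality of the occupation-number sum and the product form is also easy to check by brute force for a small truncated spectrum. A sketch, with arbitrary test values for the energies and fugacity:

```python
# Sketch: brute force check that the occupation-number expansion of Omega
# equals the product form, for a small arbitrary Fermi-Dirac spectrum.
from itertools import product
from math import exp, isclose

z, beta = 0.7, 1.3                                     # arbitrary test values
eps = [0.0, 0.4, 1.1, 2.3]                             # four single-particle levels

# sum over all occupation sets {n_eps}, with each n_eps in {0, 1}
omega_sum = sum(
    z ** sum(n) * exp(-beta * sum(ni * e for ni, e in zip(n, eps)))
    for n in product((0, 1), repeat=len(eps)))

# product form: prod_eps (1 + z exp(-beta eps))
omega_prod = 1.0
for e in eps:
    omega_prod *= 1 + z * exp(-beta * e)

assert isclose(omega_sum, omega_prod)
print(omega_sum, omega_prod)
```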

References

[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

Posted in Math and Physics Learning.

Project Gutenberg has “Calculus Made Easy” by Silvanus P. Thompson

Posted by peeterjoot on May 4, 2013

One of my favorite books [1], a great little book that my grandfather gave me, is now available on Project Gutenberg (free ebooks transcribed from old out-of-print material). Check out their Mathematics Bookshelf.

I’d seen this book recently in the Markham Public Library. It’s been republished with additions, but I didn’t feel the new author added much value.

It’s interesting to see that this project also makes the TeX sources available. Because of that I can include the awesome prologue and first chapter from this text in this post. Check it out. Doesn’t it whet your appetite for more calculus?

Prologue

Considering how many fools can calculate, it is surprising that it should be thought either a difficult or a tedious task for any other fool to learn how to master the same tricks.

Some calculus-tricks are quite easy. Some are enormously difficult. The fools who write the textbooks of advanced mathematics—and they are mostly clever fools—seldom take the trouble to show you how easy the easy calculations are. On the contrary, they seem to desire to impress you with their tremendous cleverness by going about it in the most difficult way.

Being myself a remarkably stupid fellow, I have had to unteach myself the difficulties, and now beg to present to my fellow fools the parts that are not hard. Master these thoroughly, and the rest will follow. What one fool can do, another can.

To deliver you from the Preliminary Terrors

The preliminary terror, which chokes off most fifth-form boys from even attempting to learn how to calculate, can be abolished once for all by simply stating what is the meaning—in common-sense terms—of the two principal symbols that are used in calculating.

These dreadful symbols are:

(1) d which merely means “a little bit of.”

Thus dx means a little bit of x; or du means a little bit of u. Ordinary mathematicians think it more polite to say “an element of,” instead of “a little bit of.” Just as you please. But you will find that these little bits (or elements) may be considered to be indefinitely small.

(2) \int which is merely a long S, and may be called (if you like) “the sum of.”

Thus \int dx means the sum of all the little bits of x; or \int dt means the sum of all the little bits of t. Ordinary mathematicians call this symbol “the integral of.” Now any fool can see that if x is considered as made up of a lot of little bits, each of which is called dx, if you add them all up together you get the sum of all the dx’s (which is the same thing as the whole of x). The word “integral” simply means “the whole.” If you think of the duration of time for one hour, you may (if you like) think of it as cut up into 3600 little bits called seconds. The whole of the 3600 little bits added up together make one hour.

When you see an expression that begins with this terrifying symbol, you will henceforth know that it is put there merely to give you instructions that you are now to perform the operation (if you can) of totalling up all the little bits that are indicated by the symbols that follow.

That’s all.

References

[1] Silvanus P Thompson. Calculus made easy. Macmillian, 1914. URL http://www.gutenberg.org/files/33283/33283-pdf.pdf.

Posted in Math and Physics Learning.