# Peeter Joot's (OLD) Blog.


## Planck blackbody summation

Posted by peeterjoot on November 21, 2012

[Click here for a PDF of this post with nicer formatting and figures ]

# Motivation

Here’s a silly exercise. I’m so used to seeing imaginaries in $e^{\cdots \omega \cdots}$ expressions that when I looked at the famous blackbody summation for an exponentially decreasing probability distribution

\begin{aligned}\left\langle{{n_\omega}}\right\rangle &= \sum_{n = 0}^\infty n P(n) \\ &= \frac{\sum_{n = 0}^\infty n e^{-\hbar \omega n/ k T}}{\sum_{n = 0}^\infty e^{-\hbar \omega n/ k T}},\end{aligned} \hspace{\stretch{1}}(1.1.1)

I imagined (sic) an imaginary in the exponential and thought “how can that converge?”. I thought things must somehow magically work out if the limits are taken carefully, so I derived the finite summation expressions using the old tricks.

# Guts

If we want to sum a discrete power series, say

\begin{aligned}S_N(x) = 1 + x + x^2 + \cdots + x^{N-1} = \sum_{n = 0}^{N-1} x^n,\end{aligned} \hspace{\stretch{1}}(1.2.3)

we have only to take the difference

\begin{aligned}x S_N - S_N = x^N - 1,\end{aligned} \hspace{\stretch{1}}(1.2.4)

so we have, for any $x \ne 1$, regardless of its magnitude,

\begin{aligned}\boxed{S_N(x) = \frac{1 - x^N}{1 - x}.}\end{aligned} \hspace{\stretch{1}}(1.2.5)
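As a quick numerical sanity check (a sketch of my own, not part of the original derivation), the closed form can be compared against the direct sum for a few values of $x$, including ones with magnitude greater than one:

```python
# Compare the closed form S_N(x) = (1 - x^N)/(1 - x) with the direct
# partial sum of the geometric series, for x != 1.
def S_direct(x, N):
    return sum(x**n for n in range(N))

def S_closed(x, N):
    return (1 - x**N) / (1 - x)

for x in (0.5, 2.0, -3.0):
    for N in (1, 5, 10):
        expected = S_closed(x, N)
        assert abs(S_direct(x, N) - expected) < 1e-9 * max(1.0, abs(expected))
```

Note that the identity holds for $\left\lvert x \right\rvert > 1$ as well; only the $N \rightarrow \infty$ limit requires $\left\lvert x \right\rvert < 1$.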

Observe that the derivative of $S_N$ is

\begin{aligned}\frac{dS_N}{dx} &= \sum_{n=1}^{N-1} n x^{n-1} \\ &= \frac{1}{{x}} \sum_{n=1}^{N-1} n x^n,\end{aligned} \hspace{\stretch{1}}(1.2.6)

but we also have

\begin{aligned}\frac{dS_N}{dx} &= \frac{d}{dx} \frac{1 - x^N}{1 - x} \\ &= \frac{- N x^{N-1}}{1 - x} + \frac{1 - x^N}{(1 - x)^2} \\ &=\frac{1}{{(1-x)^2}} \left( -N x^{N-1} (1-x) + 1 - x^N \right) \\ &=\frac{1}{{(1-x)^2}} \left( -N x^{N-1} +N x^{N} + 1 - x^N \right) \\ &=\frac{1}{{(1-x)^2}} \left( 1 -N x^{N-1} +(N -1) x^{N} \right).\end{aligned} \hspace{\stretch{1}}(1.2.8)

This and 1.2.6 are two expressions for the same derivative, so they must agree. As a check, 1.2.6 is $dS_N/dx = 1 + 2 x + 3 x^2 + \cdots$, which is $1$ at the origin, the same as 1.2.8. Multiplying 1.2.6 by $x$, our conclusion is

\begin{aligned}\boxed{\sum_{n=1}^{N-1} n x^n=\frac{x}{(1-x)^2} \left( 1 -N x^{N-1} +(N -1) x^{N} \right),}\end{aligned} \hspace{\stretch{1}}(1.2.13)
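This boxed identity can also be checked numerically (again just a sanity-check sketch of my own):

```python
# Compare sum_{n=1}^{N-1} n x^n against the closed form
# x/(1-x)^2 * (1 - N x^(N-1) + (N-1) x^N), for x != 1.
def weighted_direct(x, N):
    return sum(n * x**n for n in range(1, N))

def weighted_closed(x, N):
    return x / (1 - x)**2 * (1 - N * x**(N - 1) + (N - 1) * x**N)

for x in (0.3, 2.0, -1.5):
    for N in (2, 6, 12):
        expected = weighted_closed(x, N)
        assert abs(weighted_direct(x, N) - expected) < 1e-9 * max(1.0, abs(expected))
```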

a result that applies for any $x \ne 1$, no matter its magnitude. Now we can form the Planck summation truncated at some finite point (say $n = N-1$):

\begin{aligned}\frac{\sum_{n = 0}^{N-1} n e^{-\hbar \omega n/ k T}}{\sum_{n = 0}^{N-1} e^{-\hbar \omega n/ k T}}=\frac{x}{1-x} \left( 1 -N x^{N-1} +(N -1) x^{N} \right)\frac{1}{1 - x^N}.\end{aligned} \hspace{\stretch{1}}(1.2.14)
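With $x = e^{-\hbar \omega/k T}$, the truncated ratio above can be verified against direct summation (a hedged sketch; `hw_over_kT` is just my shorthand for the dimensionless $\hbar \omega/k T$):

```python
import math

# Direct truncated sums for <n> at finite N.
def avg_n_direct(hw_over_kT, N):
    num = sum(n * math.exp(-hw_over_kT * n) for n in range(N))
    den = sum(math.exp(-hw_over_kT * n) for n in range(N))
    return num / den

# Closed form: x/(1-x) * (1 - N x^(N-1) + (N-1) x^N) / (1 - x^N).
def avg_n_closed(hw_over_kT, N):
    x = math.exp(-hw_over_kT)
    return x / (1 - x) * (1 - N * x**(N - 1) + (N - 1) * x**N) / (1 - x**N)

for a in (0.1, 1.0, 3.0):
    for N in (5, 50):
        assert abs(avg_n_direct(a, N) - avg_n_closed(a, N)) < 1e-9
```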

I got this far and noticed there’s still an issue with $N \rightarrow \infty$. Taking a second look, I see that we have a plain old real exponential, as sketched in the figure below.

[Figure 1: Plot of $e^{-x/5}$]

It doesn’t really matter what the value of $\hbar \omega/k T$ is: it is greater than zero, so $x = e^{-\hbar \omega/k T} < 1$, the $N$-dependent terms vanish as $N \rightarrow \infty$, and we have for our sum

\begin{aligned}\frac{\sum_{n = 0}^{\infty} n e^{-\hbar \omega n/ k T}}{\sum_{n = 0}^{\infty} e^{-\hbar \omega n/ k T}}&=\frac{e^{-\hbar \omega/k T}}{1-e^{-\hbar \omega/k T}} \\ &=\frac{1}{e^{\hbar \omega/k T} - 1},\end{aligned} \hspace{\stretch{1}}(1.2.15)

which is the Planck result.
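As a final check (my own sketch), the truncated ratio does converge to the Planck expression $1/(e^{\hbar \omega/k T} - 1)$ as $N$ grows:

```python
import math

# The N -> infinity Planck result for <n>.
def planck_avg_n(hw_over_kT):
    return 1.0 / (math.exp(hw_over_kT) - 1.0)

# Truncated ratio, summed directly to N - 1 terms.
def truncated_ratio(hw_over_kT, N):
    num = sum(n * math.exp(-hw_over_kT * n) for n in range(N))
    den = sum(math.exp(-hw_over_kT * n) for n in range(N))
    return num / den

# For x = exp(-hw/kT) < 1 the truncation error decays geometrically,
# so a modest N already matches the closed-form limit.
for a in (0.5, 1.0, 2.0):
    assert abs(truncated_ratio(a, 2000) - planck_avg_n(a)) < 1e-12
```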