Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Archive for January, 2013

A compilation of notes, so far, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on January 29, 2013

I’ve collected my notes, so far, for the UofT ‘PHY452H1S Basic Statistical Mechanics’ course, taught by Prof. Arun Paramekanti, that I’m currently taking.

This includes all of the following (no further updates will be made to any of these):

January 29, 2013 Poisson distribution from binomial using Stirling’s approximation

January 29, 2013 Volumes in phase space

January 27, 2013 hypersphere volume calculation the easy way

January 23, 2013 Motion in phase space. Liouville and Poincaré theorems.

January 20, 2013 Generating functions and diffusion

January 17, 2013 Maxwell distribution for gases

January 16, 2013 Random walk

January 13, 2013 Problem Set 1: Binomial distributions

January 11, 2013 Probability

January 10, 2013 What is statistical mechanics and equilibrium

January 10, 2013 Some problems from Kittel Thermal Physics, chapter II

December 24, 2012 Stirling like approximation

Dec 27, 2011 Pondering a question on conditional probability.


Posted in Math and Physics Learning. | 1 Comment »

PHY452H1S Basic Statistical Mechanics. Lecture 6: Volumes in phase space. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on January 29, 2013

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Liouville’s theorem

We’ve looked at the continuity equation of phase space density

\begin{aligned}0 = \frac{\partial {\rho}}{\partial {t}} + \sum_{i_\alpha} \left(\frac{\partial {}}{\partial {p_{i_\alpha}}} \left( \rho \dot{p}_{i_\alpha} \right) + \frac{\partial {\left( \rho \dot{x}_{i_\alpha} \right) }}{\partial {x_{i_\alpha}}}\right)\end{aligned} \hspace{\stretch{1}}(1.2.1)

which with

\begin{aligned}\frac{\partial {\dot{p}_{i_\alpha}}}{\partial {p_{i_\alpha}}} + \frac{\partial {\dot{x}_{i_\alpha}}}{\partial {x_{i_\alpha}}} = 0\end{aligned} \hspace{\stretch{1}}(1.2.2)

led us to Liouville’s theorem

\begin{aligned}\boxed{\frac{d{{\rho}}}{dt}(x, p, t) = 0}.\end{aligned} \hspace{\stretch{1}}(1.2.3)

We define a system to be ergodic when, as you wait for t \rightarrow \infty, all available phase space will be covered. Not all systems are necessarily ergodic, but the hope is that all sufficiently complicated systems will be so.

We hope that

\begin{aligned}\rho(x, p, t \rightarrow \infty) \implies \frac{\partial {\rho}}{\partial {t}} = 0 \qquad \mbox{in steady state}\end{aligned} \hspace{\stretch{1}}(1.2.4)

In particular for \rho = \text{constant}, we see that our continuity equation 1.2.1 results in 1.2.2.

For example, consider a SHO system with a cyclic phase space, as in (Fig 1). For such a system we can compute the time average of an observable A

Fig 1: Phase space volume trajectory

 

\begin{aligned}\left\langle{{A}}\right\rangle = \frac{1}{{\tau}} \int_0^\tau dt A( x_0(t), p_0(t) ),\end{aligned} \hspace{\stretch{1}}(1.2.5)

or equivalently with an ensemble average, imagining that we are averaging over a number of different systems

\begin{aligned}\left\langle{{A}}\right\rangle = \frac{1}{{\tau}} \int dx dp A( x, p ) \underbrace{\rho(x, p)}_{\text{constant}}\end{aligned} \hspace{\stretch{1}}(1.2.6)

If we say that

\begin{aligned}\rho(x, p) = \text{constant} = \frac{1}{{\Omega}},\end{aligned} \hspace{\stretch{1}}(1.2.7)

so that

\begin{aligned}\left\langle{{A}}\right\rangle = \frac{1}{{\Omega}} \int dx dp A( x, p ) \end{aligned} \hspace{\stretch{1}}(1.2.8)

then what is this constant? We fix it by the normalization constraint

\begin{aligned}\int dx dp \rho(x, p) = 1\end{aligned} \hspace{\stretch{1}}(1.2.9)

So, \Omega is the allowed “volume” of phase space, the number of states that the system can take on consistent with conservation of energy.
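As an illustration of these two kinds of averages (my own sketch, not from the lecture), consider a 1D SHO with m = \omega = 1 and energy E. The time average of A = x^2 over one period matches the average over the circular energy shell taken with a constant density. A small Python check, with all parameter values chosen arbitrarily:

import numpy as np

E, w = 1.0, 1.0                          # SHO energy and angular frequency, m = 1 (arbitrary choices)
t = np.linspace(0, 2*np.pi/w, 200001)    # one full period
x = np.sqrt(2*E) * np.cos(w*t)           # trajectory on the energy shell
time_avg = np.trapz(x**2, t) / t[-1]     # time average of A = x^2

theta = np.linspace(0, 2*np.pi, 200001)  # constant density on the circular shell
shell_avg = np.trapz(2*E*np.cos(theta)**2, theta) / (2*np.pi)

print(time_avg, shell_avg)               # both ~ E = 1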

What’s the probability for a given configuration? We’ll have to enumerate all the possible configurations. For a coin toss example, we can also ask how many configurations exist where the sum of the “coin tosses” is fixed.

A worked example: Ideal gas calculation of \Omega

  • N gas atoms at phase space points \mathbf{x}_i, \mathbf{p}_i
  • constrained to volume V
  • Energy fixed at E.

\begin{aligned}\Omega(N, V, E) = \int_V d\mathbf{x}_1 d\mathbf{x}_2 \cdots d\mathbf{x}_N \int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \delta \left(E - \frac{\mathbf{p}_1^2}{2m}- \frac{\mathbf{p}_2^2}{2m}\cdots- \frac{\mathbf{p}_N^2}{2m}\right)=\underbrace{V^N}_{\text{Real space volume, not N dimensional ``volume''}} \int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \delta \left(E - \frac{\mathbf{p}_1^2}{2m}- \frac{\mathbf{p}_2^2}{2m}\cdots- \frac{\mathbf{p}_N^2}{2m}\right)\end{aligned} \hspace{\stretch{1}}(1.10)

With \gamma defined implicitly by

\begin{aligned}\frac{d\gamma}{dE} = \Omega\end{aligned} \hspace{\stretch{1}}(1.3.11)

so that with the Heaviside theta as in (Fig 2).

\begin{aligned}\Theta(x) = \left\{\begin{array}{l l}1 & \quad x \ge 0 \\ 0 & \quad x < 0\end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.0.12a)

\begin{aligned}\frac{d\Theta}{dx} = \delta(x),\end{aligned} \hspace{\stretch{1}}(1.0.12b)

Fig 2: Heaviside theta

 

we have

\begin{aligned}\gamma(N, V, E) = V^N \int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \Theta \left(E - \sum_i \frac{\mathbf{p}_i^2}{2m}\right)\end{aligned} \hspace{\stretch{1}}(1.0.13)

In three dimensions (p_x, p_y, p_z), the dimension of the momentum part of the phase space is 3 for each particle. In general the dimension of the momentum space is 3N. Here

\begin{aligned}\int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \Theta \left(E - \sum_i \frac{\mathbf{p}_i^2}{2m}\right),\end{aligned} \hspace{\stretch{1}}(1.0.14)

is the volume of a “sphere” in 3N dimensions, which we found in the problem set to be

\begin{aligned}V_{m} = \frac{ \pi^{m/2} R^{m} }{ \Gamma\left( m/2 + 1 \right)}.\end{aligned} \hspace{\stretch{1}}(1.0.15a)

\begin{aligned}\Gamma(x) = \int_0^\infty dy e^{-y} y^{x-1}\end{aligned} \hspace{\stretch{1}}(1.0.15b)

\begin{aligned}\Gamma(x + 1) = x \Gamma(x) = x!\end{aligned} \hspace{\stretch{1}}(1.0.15c)
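As a quick check (mine, not from the lecture), for m = 2 and m = 3 this reproduces the familiar circle area and sphere volume

\begin{aligned}V_2 = \frac{\pi R^2}{\Gamma(2)} = \pi R^2, \qquad V_3 = \frac{\pi^{3/2} R^3}{\Gamma(5/2)} = \frac{\pi^{3/2} R^3}{\frac{3}{4}\sqrt{\pi}} = \frac{4}{3} \pi R^3.\end{aligned}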

Since we have

\begin{aligned}\mathbf{p}_1^2 + \cdots \mathbf{p}_N^2 \le 2 m E\end{aligned} \hspace{\stretch{1}}(1.0.16)

the radius is

\begin{aligned}\text{radius} = \sqrt{ 2 m E}.\end{aligned} \hspace{\stretch{1}}(1.0.17)

This gives

\begin{aligned}\gamma(N, V, E) = V^N \frac{ \pi^{3 N/2} ( 2 m E)^{3 N/2}}{\Gamma( 3N/2 + 1) }= V^N \frac{2}{3N} \frac{ \pi^{3 N/2} ( 2 m E)^{3 N/2}}{\Gamma( 3N/2 ) },\end{aligned} \hspace{\stretch{1}}(1.0.18)

and

\begin{aligned}\Omega(N, V, E) = V^N \pi^{3 N/2} ( 2 m E)^{3 N/2 - 1} \frac{2 m}{\Gamma( 3N/2 ) }\end{aligned} \hspace{\stretch{1}}(1.0.19)

This result is almost correct, and we have to correct it in two ways. We have to fix the counting, since we need to assume that all the particles are indistinguishable.

  • Indistinguishability. We must divide by N!.
  • \Omega is not dimensionless. We need to divide by h^{3N}, where h is Planck’s constant.

In the real world we have to consider this as a quantum mechanical system. Imagine a two dimensional phase space. The allowed points are illustrated in (Fig 3).

Fig 3: Phase space volume adjustment for the uncertainty principle

 

Since \Delta x \Delta p \sim \hbar, to answer the question of how many boxes there are, we calculate the total volume and then divide by the volume of each box. This sort of handwaving wouldn’t be required if we did a proper quantum mechanical treatment.

The corrected result is

\begin{aligned}\boxed{\Omega_{\mathrm{correct}} = \frac{V^N}{N!} \frac{1}{{h^{3N}}} \frac{( 2 \pi m E)^{3 N/2 }}{E} \frac{1}{\Gamma( 3N/2 ) }}\end{aligned} \hspace{\stretch{1}}(1.0.20)
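As a small aside (not from the lecture), this is easiest to evaluate numerically in log form, which avoids overflow from the N! and \Gamma(3N/2) factors. A minimal Python sketch, where the values N = 100, V = 1, E = 50 and the constants m = h = 1 are arbitrary choices purely for illustration:

import numpy as np
from scipy.special import gammaln

def log_omega(N, V, E, m=1.0, h=1.0):
    # log of Omega_correct from 1.0.20, using log-Gamma factors to avoid overflow
    return (N * np.log(V)
            - gammaln(N + 1)                   # ln N!
            - 3*N * np.log(h)
            + (3*N/2) * np.log(2*np.pi*m*E)
            - np.log(E)
            - gammaln(3*N/2))

print(log_omega(N=100, V=1.0, E=50.0))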

To come

We’ll look at entropy

\begin{aligned}\underbrace{S}_{\text{Entropy}} = \underbrace{k_{\mathrm{B}}}_{\text{Boltzmann's constant}} \ln \underbrace{\Omega_{\mathrm{correct}}}_{\text{phase space volume (number of configurations)}}\end{aligned} \hspace{\stretch{1}}(1.0.21)

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , , , | Leave a Comment »

Poisson distribution from binomial using Stirling’s approximation

Posted by peeterjoot on January 29, 2013

[Click here for a PDF of this post with nicer formatting]

Question: Large N approximation of binomial distribution

In section 11.1 [1] it is stated that the binomial distribution

\begin{aligned}\rho(n) = \binom{N}{n} p^n (1 - p)^{N-n},\end{aligned} \hspace{\stretch{1}}(1.0.1)

has the large N approximation of a Poisson distribution

\begin{aligned}\rho(n) = \frac{(\alpha v)^n}{n!} e^{-\alpha v},\end{aligned} \hspace{\stretch{1}}(1.0.2)

where N/V = \alpha is the density and p = v/V, the probability (of finding the particle in a volume v of a total volume V in this case).

Show this.

Answer


This is another Stirling’s approximation problem. With p = v \alpha/N, and working with log expansion of the N! and (N-n)! terms of the binomial coefficient we have

\begin{aligned}\ln \rho &\approx{\left({ N + \frac{1}{{2}} }\right)} \ln N - \not{{N}} - {\left({ N - n + \frac{1}{{2}} }\right)} \ln (N - n) + (\not{{N}} - n) - \ln n! + n \ln p + (N - n) \ln (1 - p) \\ &= {\left({ N + \frac{1}{{2}} }\right)} \ln N - {\left({ N - n + \frac{1}{{2}} }\right)} \ln (N - n) - n - \ln n! + n \ln \frac{v \alpha}{N} + (N - n) \ln {\left({1 - \frac{v \alpha}{N}}\right)} \\ &= {\left({ N + \frac{1}{{2}} - N + n - \frac{1}{{2}} -n }\right)} \ln N - {\left({ N - n + \frac{1}{{2}} }\right)} \ln {\left({1 - \frac{n}{N}}\right)} - n - \ln n! + n \ln v \alpha + (N - n) \ln {\left({1 - \frac{v \alpha}{N}}\right)} \\ &\approx\ln \frac{(v\alpha)^n}{n!} - n- {\left({ N - n + \frac{1}{{2}} }\right)} {\left({- \frac{n}{N} - \frac{1}{{2}}\frac{n^2}{N^2} }\right)} + (N - n) {\left({- \frac{v \alpha}{N} - \frac{1}{{2}} \frac{v^2 \alpha^2}{N^2}}\right)} \\ &= \ln \frac{(v\alpha)^n}{n!} - n+ n {\left({ 1 - \frac{n}{N} + \frac{1}{{2N}} }\right)} {\left({1 + \frac{1}{{2}}\frac{n}{N} }\right)} - v\alpha {\left({1 - \frac{n}{N} }\right)} {\left({1 + \frac{1}{{2}} \frac{v \alpha}{N}}\right)} \\ &\approx \ln \frac{(v\alpha)^n}{n!} - n+ n - v\alpha\end{aligned} \hspace{\stretch{1}}(1.0.3)

Here all the 1/N terms have been dropped, and we are left with

\begin{aligned}\ln \rho \approx \ln \frac{(v\alpha)^n}{n!} - v\alpha,\end{aligned} \hspace{\stretch{1}}(1.0.4)

which is the logarithm of the Poisson distribution as desired.
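As a numerical sanity check of this limit (my addition, not part of the problem), scipy can compare the exact binomial probabilities with the Poisson form directly. The values of N, V and v below are arbitrary, chosen only so that p = v/V is small:

from scipy.stats import binom, poisson

N, V, v = 100000, 10000.0, 1.0            # arbitrary values with p = v/V small
alpha = N / V                             # density
p = v / V
for n in range(6):
    print(n, binom.pmf(n, N, p), poisson.pmf(n, alpha * v))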

References

[1] S.K. Ma. Statistical Mechanics. World Scientific, 1985. ISBN 9789971966072. URL http://books.google.ca/books?id=3XyA0SYK4XIC.

Posted in Math and Physics Learning. | Tagged: , | Leave a Comment »

Hypersphere volume calculation the easy way

Posted by peeterjoot on January 27, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Motivation

In problem set three I solved the hypersphere volume problem the hard way. I did it the way that I thought was obvious, starting with the spherical coordinate hypervolume element for a Euclidean space. That built on the volume element calculation I’d previously done for four dimensional Euclidean and hyperbolic spaces in [1]. This time I avoided any use of Geometric Algebra and wrote out the volume element directly using a Jacobian transformation instead of a wedge product (that leads to the same Jacobian). I then proceeded to integrate the volume element, come up with a recurrence relation for the volume, solve that for even and odd powers, and find a common expression using Gamma functions that was correct for both even and odd powers. It was a laborious process, but satisfying since I’d previously tried this calculation and not succeeded.

As and after I did this calculation, which I’ll post later, I figured there had to have been an easier way. In the midterm prep reading of section 5.5 of [2] I found just that method. It’s done there in about six lines, using a trick that seemed really sneaky! I was left with a frustrated feeling, wondering how on earth somebody figured out how to do it that way.

After a bit of reflection, I see that the trick is a very sensible approach. I’ll outline that here for 2 and 3 dimensions to show the technique, and the reader can generalize if desired.

The trick

Switching to spherical or circular coordinates when there is radial symmetry isn’t anything that we could describe as trickery. For example, with r^2 = x^2 + y^2 in a two dimensional problem or r^2 = x^2 + y^2 + z^2 in a three dimensional problem, it’s clear that we’d do the following respectively if we were evaluating an integral

\begin{aligned}\iint f(r) dx dy = \iint f(r) r dr d\theta = 2 \pi \int f(r) r dr\end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}\iiint f(r) dx dy dz = \iiint f(r) r^2 \sin\theta dr d\theta d\phi = 4 \pi \int f(r) r^2 dr\end{aligned} \hspace{\stretch{1}}(1.0.5b)

In fact, for f(r) = 1 these give us the area and volume of the circle and sphere respectively.

So, what’s the trick? The first part is the observation that the “areas” bounding the “volumes” of the circle and the sphere are found as the derivatives of those “volumes”

\begin{aligned}\frac{d}{dr} \pi r^2 = 2 \pi r\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}\frac{d}{dr} \frac{4 \pi}{3} r^3 = 4 \pi r^2.\end{aligned} \hspace{\stretch{1}}(1.0.2b)

I recall being surprised that this is the case back in high school calculus. When I lost marks for not having the formula for the surface area of a sphere memorized on some test, my calculus teacher told me this little tidbit.

Back then this wasn’t obvious to me, and I complained that it was true only for the perfect symmetry of a sphere. For example, the derivative of the volume of a cube doesn’t give the surface area of a cube (d x^3/dx = 3 x^2 \ne 6 x^2).

Once we believe or assume that the surface “area” of a hypervolume is the derivative of the volume, we can proceed with the trick. That trick is to express the volume in terms of an unknown constant. For example, for the circle and sphere the generalized “volume”s are respectively

\begin{aligned}V_2 = B r^2\end{aligned} \hspace{\stretch{1}}(1.0.3a)

\begin{aligned}V_3 = C r^3\end{aligned} \hspace{\stretch{1}}(1.0.3b)

The perimeter of the circle and surface area of the sphere are then

\begin{aligned}A_2 = 2 B r\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}A_3 = 3 C r^2\end{aligned} \hspace{\stretch{1}}(1.0.4b)

So, if we want to calculate radial integrals of the forms above, we can write

\begin{aligned}\iint f(r) dx dy = 2 B \int f(r) r dr\end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}\iiint f(r) dx dy dz = 3 C \int f(r) r^2 dr.\end{aligned} \hspace{\stretch{1}}(1.0.5b)

The essence of the trick is to do such an integral both ways: in Cartesian coordinates to get the left hand sides, and as the radial integrals on the right hand sides. Comparing these provides the constants B and C and thus completes the “volume” formulas for the circle and sphere.

The function chosen for this integral in the text was a Gaussian exponential f(r) = e^{-r^2/2}, something that is strictly radial, and can be integrated over all space. For the 2D case, we’ll integrate

\begin{aligned}\iint e^{-(x^2 +y^2)/2} dx dy = (\sqrt{2 \pi})^2= 2 B \int_0^\infty dr r e^{-r^2/2}= -2 B {\left.{e^{-r^2/2}}\right\vert}_{0}^{\infty} = 2 B.\end{aligned} \hspace{\stretch{1}}(1.0.5b)

We find that B = \pi, so the 2D spherical “volume” (the area of the circle) is V = \pi r^2.

For the 3D sphere, we have

\begin{aligned}\iiint e^{-(x^2 +y^2 + z^2)/2} dx dy dz = (\sqrt{2 \pi})^3= 3 C \int_0^\infty dr r^2 e^{-r^2/2}= 3 C \int_0^\infty dr e^{-r^2/2}= \frac{3 C}{2} \sqrt{2 \pi}.\end{aligned} \hspace{\stretch{1}}(1.0.5b)

Comparing, we find C = 4 \pi/3, so the volume of the 3D sphere is V = 4 \pi r^3/3 as expected. The same idea can be extended to higher dimensional spheres. The text does the even values of N. Treating both even and odd values, I’d expect this to yield the result I calculated with the Jacobian volume element method

\begin{aligned}V_{m} = \frac{ \pi^{m/2} R^{m} }{   \Gamma\left( m/2 + 1 \right)}.\end{aligned} \hspace{\stretch{1}}(1.0.5b)
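A quick Monte Carlo cross check of this formula (my addition; the sample count and dimensions are arbitrary) just counts the fraction of a bounding hypercube that lands inside the m-sphere:

import numpy as np
from scipy.special import gamma

def v_exact(m, R=1.0):
    return np.pi**(m/2) * R**m / gamma(m/2 + 1)

def v_monte_carlo(m, R=1.0, samples=200000, seed=0):
    # fraction of a bounding hypercube [-R, R]^m that falls inside the m-sphere
    pts = np.random.default_rng(seed).uniform(-R, R, size=(samples, m))
    return (2*R)**m * np.mean(np.sum(pts**2, axis=1) <= R**2)

for m in (2, 3, 4, 5):
    print(m, v_exact(m), v_monte_carlo(m))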

References

[1] Peeter Joot. Exploring physics with Geometric Algebra, chapter {Spherical and hyperspherical parametrization}. URL http://sites.google.com/site/peeterjoot/math2009/gabook.pdf.

[2] S.K. Ma. Statistical Mechanics. World Scientific, 1985. ISBN 9789971966072. URL http://books.google.ca/books?id=3XyA0SYK4XIC.

Posted in Math and Physics Learning. | Tagged: , , , , , , | 3 Comments »

PHY452H1S Basic Statistical Mechanics. Problem Set 2: Generating functions and diffusion

Posted by peeterjoot on January 26, 2013

[Click here for a PDF of this post with nicer formatting]

Disclaimer

This is an ungraded set of answers to the problems posed.

Question: Diffusion

The usual diffusion equation for the probability density in one dimension is given by

\begin{aligned}\frac{\partial {P}}{\partial {t}}(x, t) = D \frac{\partial^2 {{P}}}{\partial {{x}}^2}(x, t)\end{aligned} \hspace{\stretch{1}}(1.0.1)

where D is the diffusion constant. Define the Fourier components of the probability distribution via

\begin{aligned}P(x, t) = \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, t) \exp\left( i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}\tilde{P}(k, t) = \int_{-\infty}^\infty dx P(x, t) \exp\left( -i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.2b)

This is useful since the diffusion equation is linear in the probability and each Fourier component will evolve independently. Using this, solve the diffusion equation to obtain \tilde{P}(k,t) in Fourier space given the initial \tilde{P}(k,0).

Assuming an initial Gaussian profile

\begin{aligned}P(x, 0) = \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(-\frac{x^2}{2 \sigma^2}\right),\end{aligned} \hspace{\stretch{1}}(1.3)

obtain the probability density P(x,t) at a later time t. (NB: Fourier transform, get the solution, transform back.) Schematically plot the profile at the initial time and a later time.

A small modulation on top of a uniform value

Let the probability density be proportional to

\begin{aligned}\frac{1}{{L}} + A \sin(k_0 x)\end{aligned} \hspace{\stretch{1}}(1.4)

at an initial time t = 0. Assume this is in a box of large size L, but ignore boundary effects except to note that it will help to normalize the constant piece, assuming the oscillating piece integrates to zero. Also note that we have to assume A < 1/L to ensure that the probability density is positive. Obtain P(x,t) at a later time t. Roughly how long does the modulation take to decay away? Schematically plot the profile at the initial time and a later time.

Answer

Inserting the transform definitions we have

\begin{aligned}0 &= \left( \frac{\partial {}}{\partial {t}} - D \frac{\partial^2 {{}}}{\partial {{x}}^2} \right) P \\ &= \left( \frac{\partial {}}{\partial {t}} - D \frac{\partial^2 {{}}}{\partial {{x}}^2} \right) \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, t) \exp\left( i k x \right) \\ &=\int_{-\infty}^\infty \frac{dk}{2 \pi} \left(\frac{\partial {}}{\partial {t}} \tilde{P}(k, t) + k^2 D\tilde{P}(k, t) \right)\exp\left( i k x \right),\end{aligned} \hspace{\stretch{1}}(1.0.5)

We conclude that

\begin{aligned}0 = \frac{\partial {}}{\partial {t}} \tilde{P}(k, t) + k^2 D\tilde{P}(k, t),\end{aligned} \hspace{\stretch{1}}(1.0.6)

or

\begin{aligned}\tilde{P}(k, t) = A(k) e^{-k^2 D t}.\end{aligned} \hspace{\stretch{1}}(1.0.7)

Matching this to the initial distribution at t = 0 gives A(k) = \tilde{P}(k, 0), so we can write

\begin{aligned}\boxed{\tilde{P}(k, t) = \tilde{P}(k, 0) e^{-k^2 D t}.}\end{aligned} \hspace{\stretch{1}}(1.0.8)

The time evolution of the distribution’s transform just requires multiplication by the decreasing exponential factor e^{-k^2 D t}.

Propagator for the diffusion equation

We can also use this to express the explicit time evolution of the distribution

\begin{aligned}P(x, t) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, 0) e^{-k^2 D t}\exp\left( i k x \right) \\ &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \int_{-\infty}^\infty dx' P(x', 0) \exp\left( -i k x' \right)e^{-k^2 D t}\exp\left( i k x \right) \\ &= \int_{-\infty}^\infty dx' P(x', 0) \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right)\end{aligned} \hspace{\stretch{1}}(1.0.9)

Our distribution’s time evolution is given by convolution with a propagator function

\begin{aligned}P(x, t) = \int dx' P(x', 0) G(x', x)\end{aligned} \hspace{\stretch{1}}(1.0.10a)

\begin{aligned}G(x', x) =\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right)\end{aligned} \hspace{\stretch{1}}(1.0.10b)

For t \ge 0 we can complete the square, finding that this propagator is

\begin{aligned}G(x', x) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right) \\ &= \exp\left( \left(\frac{i (x - x')}{2 \sqrt{D t}} \right)^2 \right)\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( - \left(k \sqrt{D t} + \frac{i (x - x')}{2 \sqrt{D t}} \right)^2 \right)\end{aligned} \hspace{\stretch{1}}(1.0.11)

or

\begin{aligned}\boxed{G(x', x) =\frac{1}{\sqrt{4 \pi D t}} \exp\left(-\frac{(x - x')^2}{4 D t}\right).}\end{aligned} \hspace{\stretch{1}}(1.0.12)

A schematic plot of this function as a function of t for fixed x - x' is plotted in (Fig1).

Fig1: Form of the propagator function for the diffusion equation

Gaussian

For the Gaussian of 1.3 we compute the initial time Fourier transform

\begin{aligned}\tilde{P}(k) &= \int_{-\infty}^\infty dx \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(-\frac{x^2}{2 \sigma^2}-i k x \right) \\ &= \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(-\left( \frac{\sqrt{ 2 \sigma^2}}{2} k i\right)^2\right)\int_{-\infty}^\infty dx \exp\left(-\left( \frac{x}{\sqrt{2 \sigma^2} } + \frac{\sqrt{ 2 \sigma^2}}{2} k i\right)^2\right) \\ &= \exp\left(-\frac{\sigma^2 k^2}{2}\right).\end{aligned} \hspace{\stretch{1}}(1.0.13)

The time evolution of the generating function is

\begin{aligned}\tilde{P}(k, t) = \exp\left(-\frac{\sigma^2 k^2}{2} - D k^2 t\right),\end{aligned} \hspace{\stretch{1}}(1.0.14)

and we can find our time evolved probability density by inverse transforming

\begin{aligned}P(x, t) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left(-\frac{\sigma^2 k^2}{2} - D k^2 t + i k x\right) \\ &= \exp\left(i \frac{x}{ 2 \sqrt{\frac{\sigma^2}{2} + D t} }\right)^2\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left(-\left(k \sqrt{\frac{\sigma^2}{2} + D t} + i \frac{x}{ 2 \sqrt{\frac{\sigma^2}{2} + D t} }\right)^2\right)\end{aligned} \hspace{\stretch{1}}(1.0.15)

For t \ge 0 this is

\begin{aligned}\boxed{P(x, t) =\frac{1}{{\sqrt{2 \pi \left( \sigma^2 + 2 D t \right) } }}\exp\left(-\frac{x^2}{2 \left( \sigma^2 + 2 D t \right) }\right).}\end{aligned} \hspace{\stretch{1}}(1.0.16)

As a check, we see that this reproduces the t = 0 value as expected. A further check using Mathematica applying the propagator 1.0.12, also finds the same result as this manual calculation.
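Yet another check is a purely numerical one: evolve the initial Gaussian with an FFT, multiplying each Fourier component by e^{-k^2 D t} as in 1.0.8, and compare with 1.0.16. A minimal sketch (the grid size, domain length, and parameter values are arbitrary choices):

import numpy as np

L, n, D, t, sigma = 40.0, 4096, 1.0, 2.0, 1.0     # arbitrary grid and parameters
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(n, d=L/n)

P0 = np.exp(-x**2/(2*sigma**2)) / np.sqrt(2*np.pi*sigma**2)
Pt = np.fft.ifft(np.fft.fft(P0) * np.exp(-k**2 * D * t)).real   # evolve each Fourier component

exact = np.exp(-x**2/(2*(sigma**2 + 2*D*t))) / np.sqrt(2*np.pi*(sigma**2 + 2*D*t))
print(np.abs(Pt - exact).max())                                  # ~ 0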

This is plotted for D = \sigma = 1 in (Fig2) for a couple different times t \ge 0.

Fig2: Gaussian probability density time evolution with diffusion

Boxed constant with small oscillation

The normalization of the distribution depends on the interval boundaries. With the box range given by x \in [a, a + L] we have

\begin{aligned}\int_a^{a + L} dx \left( \frac{1}{{L}} + A \sin( k_0 x) \right)=1 - \frac{A}{k_0} \left( \cos( k_0(a + L) ) - \cos( k_0 a ) \right)\end{aligned} \hspace{\stretch{1}}(1.0.17)

With an even range for box x \in [-L/2, L/2] this is unity.

To find the distribution at a later point in time we can utilize the propagator

\begin{aligned}P(x, t) = \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \left( \frac{1}{{L}} + A \sin( k_0 x' ) \right) \exp\left( - \frac{(x' - x)^2}{4 D t }\right)\end{aligned} \hspace{\stretch{1}}(1.0.18)

Let’s write this as

\begin{aligned}P(x, t) = P_{\mathrm{rect}}(x, t) + P_{\mathrm{sin}}(x, t)\end{aligned} \hspace{\stretch{1}}(1.0.19a)

\begin{aligned}P_{\mathrm{rect}}(x, t) =\frac{1}{{2 L \sqrt{\pi D t} }} \int_{-L/2}^{L/2} dx' \exp\left( - \frac{(x' - x)^2}{4 D t }\right)\end{aligned} \hspace{\stretch{1}}(1.0.19b)

\begin{aligned}P_{\mathrm{sin}}(x, t)=\frac{A}{2 \sqrt{\pi D t} } \int_{-L/2}^{L/2} dx' \sin( k_0 x' ) \exp\left( - \frac{(x' - x)^2}{4 D t }\right)\end{aligned} \hspace{\stretch{1}}(1.0.19c)

Applying a u = (x' - x)/\sqrt{4 D t} change of variables for the first term, we can reduce it to a difference of error functions

\begin{aligned}P_{\mathrm{rect}}(x, t) &= \frac{1}{{L}} \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \exp\left( - \frac{(x' - x)^2}{4 D t }\right) \\ &= \frac{1}{{L \sqrt{\pi}}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{-u^2} \\ &= \frac{1}{{2 L}} \left( \text{erf}\left( \frac{L/2 -x}{2 \sqrt{D t}} \right)-\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} \right)\right)\end{aligned} \hspace{\stretch{1}}(1.0.20)

Following Mathematica, let’s introduce a two argument error function for the difference between two points

\begin{aligned}\text{erf}(z_0, z_1) \equiv \text{erf}(z_1) - \text{erf}(z_0).\end{aligned} \hspace{\stretch{1}}(1.0.21)

Using that, our rectangular function’s time evolution can be written

\begin{aligned}P_{\mathrm{rect}}(x, t)=\frac{1}{{2 L}} \text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} ,\frac{L/2 -x}{2 \sqrt{D t}} \right)\end{aligned} \hspace{\stretch{1}}(1.0.22)
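For plotting and checking purposes this form is easy to evaluate with scipy’s real error function. A small sketch (mine), where the helper name erf2 is just my own shorthand for the two argument convention above:

import numpy as np
from scipy.special import erf

def erf2(z0, z1):
    # two argument error function, erf(z0, z1) = erf(z1) - erf(z0)
    return erf(z1) - erf(z0)

def p_rect(x, t, L=1.0, D=1.0):
    s = 2*np.sqrt(D*t)
    return erf2(-(L/2 + x)/s, (L/2 - x)/s) / (2*L)

print(p_rect(np.array([-0.75, -0.25, 0.0, 0.25, 0.75]), 1e-8))   # ~ [0, 1, 1, 1, 0]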

For L = D = 1, and t = 10^{-8}, this is plotted in (Fig3). Somewhat surprisingly, this difference of error functions does appear to result in a rectangular function for small t.

Fig3: Rectangular part of the probability distribution for very small t

The time evolution of this non-oscillation part of the probability distribution is plotted as a function of both t and x in (Fig4).

Fig4: Time evolution of the rectangular part of the probability distribution

For the sine piece we can also find a solution in terms of (complex) error functions

\begin{aligned}P_{\mathrm{sin}}(x, t) &= A \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \sin( k_0 x' ) \exp\left( - \frac{(x' - x)^2}{4 D t }\right) \\ &= \frac{A}{\sqrt{\pi}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du \sin( k_0 ( x + 2 u \sqrt{D t} ) ) e^{-u^2} \\ &= \frac{A}{2 i \sqrt{\pi}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du \left( e^{ i k_0 ( x + 2 u \sqrt{D t} ) } -e^{ -i k_0 ( x + 2 u \sqrt{D t} ) }\right)e^{-u^2} \\ &= \frac{A}{2 i \sqrt{\pi}}\left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -u^2 + 2 i k_0 u \sqrt{D t} } -e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -u^2 -2 i k_0 u \sqrt{D t} }\right) \\ &= \frac{A}{2 i \sqrt{\pi}}e^{ -k_0^2 D t } \left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -(u - i k_0 \sqrt{D t})^2 } -e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -(u + i k_0 \sqrt{D t})^2 }\right) \\ &= \frac{A}{2 i \sqrt{\pi}}e^{ -k_0^2 D t } \left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}}^{\frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t}} dv e^{ -v^2 }-e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}}^{\frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}} dv e^{ -v^2 }\right) \\ &= \frac{A}{4 i }e^{ -k_0^2 D t } \left( e^{ i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t} \right)-e^{ -i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}\right)\right)\end{aligned} \hspace{\stretch{1}}(1.0.23)

This is plotted for A = D = L = 1, k_0 = 8 \pi, and t = 10^{-8} in (Fig5).

Fig5: Verification at t -> 0 that the diffusion result is sine like

The diffusion of this, again for A = D = L = 1, k_0 = 8 \pi, and t \in [10^{-5}, 0.01] is plotted in (Fig6). Again we see that we have the expected sine for small t.

Fig6: Diffusion of the oscillatory term

Putting both the rectangular and the windowed sine portions of the probability distribution together, we have the diffusion result for the entire distribution

\begin{aligned}\boxed{\begin{aligned}P(x, t)&=\frac{1}{{2 L}} \text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} ,\frac{L/2 -x}{2 \sqrt{D t}} \right) \\ &+\frac{A}{4 i }e^{ -k_0^2 D t + i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t} \right) \\ &-\frac{A}{4 i }e^{ -k_0^2 D t -i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}\right)\end{aligned}}\end{aligned} \hspace{\stretch{1}}(1.0.24)

It is certainly ugly looking! We see that the oscillation die-off is governed by the \exp( -k_0^2 D t) term. In a time

\begin{aligned}t = \frac{1}{{k_0^2 D}},\end{aligned} \hspace{\stretch{1}}(1.0.25)

that oscillation dies away to 1/e of its initial amplitude. This dispersion is plotted at times t = 10^{-5} and t = 1/(k_0^2 D) for L = D = 1, k_0 = 8 \pi and A = 1/2 in (Fig7).

Fig7: Initial time distribution and dispersion of the oscillatory portion to 1/e of initial amplitude

Similar to the individual plots of P_{\mathrm{rect}}(x, t) and P_{\mathrm{sin}}(x, t) above, we plot the time evolution of the total probability dispersion P(x, t) in (Fig8). We see in the plots above that the rectangular portion of this distribution will also continue to flatten over time after most of the oscillation has also died off.

Fig8: Diffusion of uniform but oscillating probability distribution

An easier solution for the sinusoidal part

After working this problem, talking with classmates about how they solved it (because I was sure I’d done this windowed oscillating distribution the hard way), I now understand what was meant by “ignore boundary effects”. That is, ignore the boundary effects in the sinusoid portion of the distribution. I didn’t see how we could ignore the boundary effects because doing so would make the sine Fourier transform non-convergent. Ignoring pesky ideas like convergence we can “approximate” the Fourier transform of the windowed sine as

\begin{aligned}\tilde{P}_{\mathrm{sin}}(k) &\approx A \int_{-\infty}^\infty \sin (k_0 x) e^{-i k x} dx \\ &= \frac{A }{2 i} \int_{-\infty}^\infty \left(e^{i (k_0 - k) x} -e^{-i (k_0 + k) x} \right)dx \\ &= \frac{A \pi}{i} \left( \delta(k - k_0) - \delta(k + k_0)\right)\end{aligned} \hspace{\stretch{1}}(1.0.26)

Now we can inverse Fourier transform the diffusion result with ease since we’ve got delta functions. That is

\begin{aligned}P_{\mathrm{sin}}(x, t) &\approx \frac{1}{{2 \pi}} \frac{A \pi}{i} \int\left( \delta(k - k_0) - \delta(k + k_0)\right)e^{-D k^2 t} e^{i k x} dk \\ &= e^{-D k_0^2 t} \frac{ e^{i k_0 x} - e^{-i k_0 x}}{2 i} \\ &= e^{-D k_0^2 t} \sin( k_0 x )\end{aligned} \hspace{\stretch{1}}(1.0.27)
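It is easy to confirm symbolically that this approximate form satisfies the diffusion equation; a small sympy sketch (my addition):

import sympy as sp

x, t, D, k0, A = sp.symbols('x t D k_0 A', positive=True)
P = A * sp.exp(-D*k0**2*t) * sp.sin(k0*x)
print(sp.simplify(sp.diff(P, t) - D*sp.diff(P, x, 2)))   # -> 0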

Question: Generating function

The Fourier transform of the probability distribution defined above \tilde{P}(k) is called the “generating function” of the distribution. Show that the n-th derivative of this generating function \partial^n \tilde{P}(k)/\partial k^n at the origin k = 0 is related to the n-th moment of the distribution function defined via \left\langle{{x^n}}\right\rangle = \int dx P(x) x^n. We will later see that the “partition function” in statistical mechanics is closely related to this concept of a generating function, and derivatives of this partition function can be related to thermodynamic averages of various observables.

Answer

\begin{aligned}{\left.{{ \frac{\partial^n}{\partial k^n} \tilde{P}(k) }}\right\vert}_{{k = 0}} &= {\left.{{ \frac{\partial^n}{\partial k^n} \left( \int_{-\infty}^\infty dx P(x) \exp\left( -i k x \right)\right)}}\right\vert}_{{k = 0}} \\ &= {\left.{{ \left(\int_{-\infty}^\infty dx P(x) (-i x)^n\exp\left( -i k x \right)\right)}}\right\vert}_{{k = 0}} \\ &= (-i)^n \int_{-\infty}^\infty dx P(x) x^n \\ &= (-i)^n \left\langle{{x^n}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.0.26)
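As a concrete example (my addition), taking the Gaussian transform \tilde{P}(k) = e^{-\sigma^2 k^2/2} of 1.0.13, the first two derivatives at k = 0 give 0 and -\sigma^2, matching (-i)^1 \left\langle{{x}}\right\rangle = 0 and (-i)^2 \left\langle{{x^2}}\right\rangle = -\sigma^2:

import sympy as sp

k, sigma = sp.symbols('k sigma', positive=True)
Pk = sp.exp(-sigma**2 * k**2 / 2)              # transform of the Gaussian, as in 1.0.13
print(sp.diff(Pk, k, 1).subs(k, 0))            # 0          = (-i)^1 <x>
print(sp.diff(Pk, k, 2).subs(k, 0))            # -sigma**2  = (-i)^2 <x^2>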

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , | Leave a Comment »

A period determination problem from Landau and Lifshitz

Posted by peeterjoot on January 23, 2013

A guest post by Alexandre Léonard

[Click here for a PDF of this post with nicer formatting]

Question: A period problem from [1] section 11 (problem 2b)

Determine the period of oscillation, as a function of the energy, when a particle of mass m moves in a field for which the potential energy is

\begin{aligned}U = U_0 \tan^2\alpha x\end{aligned} \hspace{\stretch{1}}(1.0.1)

Answer

The answer is:

\begin{aligned}T=\frac{\pi}{\alpha}\sqrt{\frac{2m}{E+U_0}}\end{aligned} \hspace{\stretch{1}}(1.0.2)

My attempt:
I start from the formula

\begin{aligned}T(E) = \sqrt{2m}\int_{x_1(E)}^{x_2(E)}\frac{dx}{\sqrt{E-U(x)}},\end{aligned} \hspace{\stretch{1}}(1.0.3)

where x_1(E) and x_2(E) are the limits of the motion. From the symmetry of our potential, it is clear that we have:

\begin{aligned}T(E) = 2\sqrt{2m}\int_{0}^{x_2(E)}\frac{dx}{\sqrt{E-U(x)}}.\end{aligned} \hspace{\stretch{1}}(1.0.4)

Now we can easily find x_2(E):

\begin{aligned}E = U_0\tan^2\alpha x_2 \quad \rightarrow \quad x_2 = \frac{1}{\alpha}\text{arctan}{\sqrt{\frac{E}{U_0}}},\end{aligned} \hspace{\stretch{1}}(1.0.5)

and we are left with the following integral:

\begin{aligned}T(E) = 2\sqrt{2m}\int_{0}^{\frac{1}{\alpha}\text{arctan}{\sqrt{\frac{E}{U_0}}}}\frac{dx}{\sqrt{E-U_0\tan^2\alpha x}}.\end{aligned} \hspace{\stretch{1}}(1.0.6)

First obvious change of variable is y = \alpha x ~ \rightarrow ~ dy = \alpha dx which gives

\begin{aligned}T(E) = \frac{2}{\alpha}\sqrt{2m}\int_{0}^{\text{arctan}{\sqrt{\frac{E}{U_0}}}}\frac{dy}{\sqrt{E-U_0\tan^2 y}}.\end{aligned} \hspace{\stretch{1}}(1.0.7)

Factorizing E under the square root leads to:

\begin{aligned}T(E) = \frac{2}{\alpha}\sqrt{\frac{2m}{E}}\int_{0}^{\text{arctan}{\sqrt{\frac{E}{U_0}}}}\frac{dy}{\sqrt{1 -\frac{U_0}{E}\tan^2 y}}.\end{aligned} \hspace{\stretch{1}}(1.0.8)

Now write a^2=U_0/E and make a change of variables that will eliminate the square root:

\begin{aligned}a\tan y = \sin z.\end{aligned} \hspace{\stretch{1}}(1.0.9)

\begin{aligned}\cos z dz = a \sec^2y dy\end{aligned} \hspace{\stretch{1}}(1.0.10)

\begin{aligned}\int_{0}^{\text{arctan}{\sqrt{\frac{E}{U_0}}}} \frac{dy}{\sqrt{1 -\frac{U_0}{E  }\tan^2 y}} = \frac{1}{a} \int_{0}^{\pi/2} \frac{\cos ^2 y \cos z dz}{\cos z} = a \int_{0}^{\pi/2} \frac{dz}  {a^2 + \sin^2 z} \end{aligned} \hspace{\stretch{1}}(1.0.11x)

where we have used the square of the relation between y and z :

\begin{aligned} a^2 (1 - \cos^2 y) = \sin^2z \cos^2y \quad\Leftrightarrow \quad \frac{a^2}{a^2+\sin^2z} = \cos^2y.\end{aligned} \hspace{\stretch{1}}(1.0.11)

Let us now calculate the definite integral:

\begin{aligned}\int_{0}^{\pi/2} \frac{dx}{a^2 + \sin^2 x}\end{aligned} \hspace{\stretch{1}}(1.0.12)

First method: brute force
Here are the main steps to solve this integral (writing everything out would really take some time). The only thing to guess is the proper change of variable. In our case the good one is the following:

\begin{aligned}u = \tan \frac{x}{2} \quad \rightarrow \quad du = \frac{1}{2}(1+u^2) dx\end{aligned} \hspace{\stretch{1}}(1.0.13)

With a bit of not too complicated trigonometry, we obtain:

\begin{aligned}\sin x = \frac{2u}{1+u^2}, \quad \cos x = \frac{1-u^2}{1+u^2}\end{aligned} \hspace{\stretch{1}}(1.0.14)

Injecting these results into the integral yields:

\begin{aligned}\int_{0}^{\pi/2} \frac{dx}{a^2 + \sin^2 x} = \int_{0}^{1} \frac{2}{(a^2 + \frac{4u^2}{(1+u^2)^2}) (1+u^2)}du = \int_{0}^{1} \frac{2(1+u^2)}{a^2(1+(2+\frac{4}{a^2})u^2 + u^4)}du\end{aligned} \hspace{\stretch{1}}(1.0.15)

We have to compute the integral of the ratio between two polynomials, and we just have to follow the recipe from kindergarten. First we find the roots of the denominator; I call them a_+ and a_-:

\begin{aligned}a_+ = \frac{-2-a^2+2\sqrt{1+a^2}}{a^2}\end{aligned} \hspace{\stretch{1}}(1.0.16)

\begin{aligned}a_- = \frac{-2-a^2-2\sqrt{1+a^2}}{a^2}\end{aligned} \hspace{\stretch{1}}(1.0.17)

Note that, a being positive, a_- and a_+ are negative (not hard to see …), and we get for the integral:

\begin{aligned}\int_{0}^{1} \frac{2(1+u^2)}{a^2(1+(2+\frac{4}{a^2})u^2 + u^4)}du = \frac{2}{a^2}\int_{0}^{1} \frac{(1+u^2)}{(u^2-a_+)(u^2-a_-)}du\end{aligned} \hspace{\stretch{1}}(1.0.18)

Rewrite the ratio as:

\begin{aligned}\frac{(1+u^2)}{(u^2-a_+)(u^2-a_-)} = \frac{A}{(u^2-a_+)}+ \frac{B}{(u^2-a_-)}\end{aligned} \hspace{\stretch{1}}(1.0.19)

we find A = \frac{1+a_+}{a_+-a_-} and B=\frac{1+a_-}{a_--a_+}. Injecting these results into the integral, we get:

\begin{aligned}\frac{2}{a^2}\int_{0}^{1} \frac{(1+u^2)}{(u^2-a_+)(u^2-a_-)}du = \frac{2}{a^2}\left[ \frac{1+a_+}{a_+-a_-}\int_0^1 \frac{1}{u^2+|a_+|}du + \frac{1+a_-}{a_- - a_+}\int_0^1\frac{1}{u^2+|a_-|}du\right]\end{aligned} \hspace{\stretch{1}}(1.0.20)

It is now possible to integrate:

\begin{aligned}\frac{2}{a^2}\int_{0}^{1} \frac{(1+u^2)}{(u^2-a_+)(u^2-a_-)}du = \frac{2}{a^2}\left[ \frac{1+a_+}{a_+-a_-}\frac{1}{\sqrt{|a_+|}}\text{arctan} \frac{1}{\sqrt{|a_+|}}+ \frac{1+a_-}{a_- - a_+}\frac{1}{\sqrt{|a_-|}}\text{arctan} \frac{1}{\sqrt{|a_-|}} \right]\end{aligned} \hspace{\stretch{1}}(1.0.21)

This result is correct, but I won’t put all the pieces together to get the answer, even though it must be possible…

Second method: smart but uses deep results from mathematics
This method uses the Residue Theorem. I am not sure it is taught to many students other than physicists, mathematicians, and maybe engineers. Let me recall the problem: find the value of the definite integral:

\begin{aligned}\int_{0}^{\pi/2} \frac{dx}{a^2 + \sin^2 x}\end{aligned} \hspace{\stretch{1}}(1.0.22)

From the symmetry of the integrand we see that it is equivalent to calculate:

\begin{aligned}\frac{1}{4}\int_{0}^{2\pi} \frac{dx}{a^2 + \sin^2 x}\end{aligned} \hspace{\stretch{1}}(1.0.23)

Let me write the \sin in terms of complex exponentials:

\begin{aligned} \sin x = \frac{e^{ix}-e^{-ix}}{2i}\end{aligned} \hspace{\stretch{1}}(1.0.24)

Setting z=e^{ix} we have dz = izdx. Now, thinking of this integral as a contour integral in the complex plane, we see that the path is just the unit circle: e^{ix} with 0\leq x \leq 2\pi. So we must compute:

\begin{aligned}\frac{1}{4}\int_{0}^{2\pi} \frac{dx}{a^2 + \sin^2 x} =\frac{1}{4} \int_{C} \frac{-i}{a^2 + \left(\frac{z-z^{-1}}{2i}\right)^2} \frac{dz}{z} = i \int_C \frac{z}{z^4-(2+4a^2)z^2+1}dz\end{aligned} \hspace{\stretch{1}}(1.0.25)

Here we set u=z^2. Now we have to loop twice on the unit circle because u=e^{2ix}. But this is the same as looping only once and multiplying the result by 2. However, this factor of 2 cancels with the factor of 2 from du=2zdz, and we get:

\begin{aligned}i \int_C \frac{z}{z^4-(2+4a^2)z^2+1}dz = i \int_C \frac{1}{u^2-(2+4a^2)u+1}du\end{aligned} \hspace{\stretch{1}}(1.0.26)

Here again we look for the roots of the denominator and find out which of those is contained inside the unit circle. The roots are given by:

\begin{aligned}u_+ = 1+2a^2+2a\sqrt{1+a^2}\end{aligned} \hspace{\stretch{1}}(1.0.27)

\begin{aligned}u_- = 1+2a^2-2a\sqrt{1+a^2}\end{aligned} \hspace{\stretch{1}}(1.0.28)

From the expression of u_+ it is clear that u_+ lies on the real axis and is bigger than 1. Now, a>0, and by looking carefully at u_- we can figure out that 0 < u_- < 1.
We have to find the residue of the integrand at the pole (the root) inside the unit circle. This is given by:

\begin{aligned}\text{Res}_{u_-} = \lim_{u\rightarrow u_-} \frac{(u-u_-)}{(u-u_+)(u-u_-)} = \frac{1}{u_- - u_+} = \frac{-1}{4a\sqrt{1+a^2}}\end{aligned} \hspace{\stretch{1}}(1.0.29)

And finally, the residue theorem tells us that the integral is equal to: 2\pi i \sum \text{Res}, where the sum is over all the poles contained inside the path. So we get:

\begin{aligned}\int_{0}^{\pi/2} \frac{dx}{a^2 + \sin^2 x} =i \int_C \frac{1}{u^2-(2+4a^2)u+1}du = i 2\pi i  \frac{-1}{4a\sqrt{1+a^2}} = \frac{\pi}{2a\sqrt{1+a^2}}\end{aligned} \hspace{\stretch{1}}(1.0.30)

Putting everything together leads to the final correct answer:

\begin{aligned}T(E) = \frac{2}{\alpha}\sqrt{\frac{2m}{E}} \sqrt{\frac{U_0}{E}}\frac{\pi}{2\sqrt{\frac{U_0}{E}}\sqrt{1+\frac{U_0}{E}}} =\frac{\pi}{\alpha}\sqrt{\frac{2m}{E+U_0}} \end{aligned} \hspace{\stretch{1}}(1.0.31)
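A quick numerical check of this result (my addition; the values of m, U_0, \alpha and E are arbitrary) compares direct quadrature of 1.0.4 against the closed form:

import numpy as np
from scipy.integrate import quad

m, U0, alpha, E = 1.0, 1.0, 1.0, 2.0                # arbitrary values
x2 = np.arctan(np.sqrt(E/U0)) / alpha               # turning point
numeric = 2*np.sqrt(2*m) * quad(lambda x: 1/np.sqrt(E - U0*np.tan(alpha*x)**2), 0, x2)[0]
closed = (np.pi/alpha) * np.sqrt(2*m/(E + U0))
print(numeric, closed)                              # both ~ 2.565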

References

[1] LD Landau and EM Lifshitz. Classical mechanics. 1960.

Posted in Math and Physics Learning. | Tagged: , , , , | Leave a Comment »

PHY452H1S Basic Statistical Mechanics. Lecture 5: Motion in phase space. Liouville and Poincaré theorems. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on January 22, 2013

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Motion in phase space

Classical system: \mathbf{x}_i, \mathbf{p}_i with dimensionality

\begin{aligned}\underbrace{2}_{x, p}\underbrace{d}_{\text{space dimension}}\underbrace{N}_{\text{Number of particles}}\end{aligned} \hspace{\stretch{1}}(1.2.1)

Hamiltonian H is the “energy function”

\begin{aligned}H = \underbrace{\sum_{i = 1}^N \frac{\mathbf{p}_i^2}{2m} }_{\text{Kinetic energy}}+ \underbrace{\sum_{i = 1}^N V(\mathbf{x}_i) }_{\text{Potential energy}}+ \underbrace{\sum_{i < j}^N \Phi(\mathbf{x}_i - \mathbf{x}_j)}_{\text{Internal energy}}\end{aligned} \hspace{\stretch{1}}(1.2.2)

\begin{aligned}\mathbf{\dot{p}}_i = \mathbf{F} = \text{force}\end{aligned} \hspace{\stretch{1}}(1.0.3a)

\begin{aligned}\mathbf{\dot{x}}_i = \frac{\mathbf{p}_i}{m}\end{aligned} \hspace{\stretch{1}}(1.0.3b)

Expressed in terms of the Hamiltonian this is

\begin{aligned}\dot{p}_{i_\alpha} = - \frac{\partial {H}}{\partial {x_{i_\alpha}}}\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}\dot{x}_{i_\alpha} = \frac{\partial {H}}{\partial {p_{i_\alpha}}}\end{aligned} \hspace{\stretch{1}}(1.0.4b)

In phase space we can have any number of possible trajectories as illustrated in (Fig1).

Fig1: Disallowed and allowed phase space trajectories

Liouville’s theorem

We are interested in asking the question of how the density of a region in phase space evolves as illustrated in (Fig2)

Fig2: Evolution of a phase space volume

We define a phase space density

\begin{aligned}\rho(p_{i_\alpha}, x_{i_\alpha}, t),\end{aligned} \hspace{\stretch{1}}(1.0.5)

and seek to demonstrate Liouville’s theorem, that the phase space density does not change. To do so, consider the total time derivative of the phase space density

\begin{aligned}\frac{d{{\rho}}}{dt} &= \frac{\partial {\rho}}{\partial {t}} + \sum_{i_\alpha} \left( \dot{p}_{i_\alpha} \frac{\partial {\rho}}{\partial {p_{i_\alpha}}} + \dot{x}_{i_\alpha} \frac{\partial {\rho}}{\partial {x_{i_\alpha}}} \right) \\ &= \frac{\partial {\rho}}{\partial {t}} + \sum_{i_\alpha} \left( \frac{\partial {(\rho \dot{p}_{i_\alpha})}}{\partial {p_{i_\alpha}}} + \frac{\partial {(\rho \dot{x}_{i_\alpha})}}{\partial {x_{i_\alpha}}} \right) - \rho \sum_{i_\alpha} \left(\frac{\partial {\dot{p}_{i_\alpha}}}{\partial {p_{i_\alpha}}} +\frac{\partial {\dot{x}_{i_\alpha}}}{\partial {x_{i_\alpha}}} \right) \\ &= \frac{\partial {\rho}}{\partial {t}} + \underbrace{\sum_{i_\alpha} \left(\frac{\partial {(\rho \dot{p}_{i_\alpha})}}{\partial {p_{i_\alpha}}} + \frac{\partial {(\rho \dot{x}_{i_\alpha})}}{\partial {x_{i_\alpha}}} \right)}_{\equiv \boldsymbol{\nabla} \cdot \mathbf{j}}- \rho \sum_{i_\alpha} \underbrace{\left(-\frac{\partial^2 H}{\partial p_{i_\alpha} \partial x_{i_\alpha}}+\frac{\partial^2 H}{\partial x_{i_\alpha} \partial p_{i_\alpha}}\right)}_{= 0}\end{aligned} \hspace{\stretch{1}}(1.0.6)

We’ve implicitly defined a current density \mathbf{j} above by comparing to the continuity equation

\begin{aligned}\frac{\partial {\rho}}{\partial {t}} + \boldsymbol{\nabla} \cdot \mathbf{j} = 0\end{aligned} \hspace{\stretch{1}}(1.0.7)

Here we have

\begin{aligned}\mathbf{j} = \rho \left( \dot{x}_{i_\alpha}, \dot{p}_{i_\alpha} \right) \rightarrow \mbox{current in phase space}\end{aligned} \hspace{\stretch{1}}(1.0.8)

Usually we have

\begin{aligned}\mathbf{j}_{\mathrm{usual}} \sim \rho \mathbf{v}\end{aligned} \hspace{\stretch{1}}(1.0.9a)

\begin{aligned}\mathbf{j} = -D \boldsymbol{\nabla} \rho\end{aligned} \hspace{\stretch{1}}(1.0.9b)

but we don’t care about this diffusion relation, just the continuity equation equivalent

\begin{aligned}\frac{\partial {\rho}}{\partial {t}} + \boldsymbol{\nabla} \cdot \mathbf{j} = 0\end{aligned} \hspace{\stretch{1}}(1.0.10)

The implication is that

\begin{aligned}\frac{d{{\rho}}}{dt} = 0.\end{aligned} \hspace{\stretch{1}}(1.0.11)

Flow in phase space is very similar to an “incompressible fluid”.
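This incompressibility is just the statement that the phase space velocity field (\dot{x}, \dot{p}) is divergence free, which follows directly from Hamilton’s equations. A tiny sympy check for a single particle Hamiltonian (my addition; the potential V(x) is left arbitrary):

import sympy as sp

x, p, m = sp.symbols('x p m', positive=True)
V = sp.Function('V')
H = p**2/(2*m) + V(x)              # single particle Hamiltonian
xdot = sp.diff(H, p)               # Hamilton's equations
pdot = -sp.diff(H, x)
print(sp.simplify(sp.diff(xdot, x) + sp.diff(pdot, p)))   # -> 0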

Time averages, and Poincaré recurrence theorem

We want to look at how various observables behave over time

\begin{aligned}\overline{A} = \frac{1}{{T}} \int_0^T dt \rho(x, p, t) A(p, x)\end{aligned} \hspace{\stretch{1}}(1.0.12)

We’d like to understand how such averages behave over long time intervals, looking at \rho(x, p, t \rightarrow \infty).

This long term behaviour is described by the Poincaré recurrence theorem. If we wait long enough, a point in phase space will come arbitrarily close to its starting point, recurring or “closing the trajectory loops”.

A simple example of a recurrence is an undamped SHO, such as a pendulum. That pendulum bob when it hits the bottom of the cycle will have the same velocity (and position) each time it goes through a complete cycle. If we imagine a much more complicated system, such as N harmonic oscillators, each with different periods, we can imagine that it will take infinitely long for this cycle or recurrence to occur, and the trajectory will end up sweeping through all of phase space.

Posted in Math and Physics Learning. | Tagged: , , , , | Leave a Comment »

PHY452H1S Basic Statistical Mechanics. Problem Set 1: Binomial distributions

Posted by peeterjoot on January 20, 2013

[Click here for a PDF of this post with nicer formatting]

Disclaimer

This is an ungraded set of answers to the problems posed.

Question: Limiting form of the binomial distribution

Starting from the simple case of the binomial distribution

\begin{aligned}P_N(X) = 2^{-N} \frac{N!}{\left(\frac{N + X}{2}\right)!\left(\frac{N - X}{2}\right)!}\end{aligned} \hspace{\stretch{1}}(1.0.1)

derive the Gaussian distribution which results when N \gg 1 and {\left\lvert{X}\right\rvert} \ll N.

Answer

We’ll work with the logarithms of P_N(X).

Note that the logarithm of the Stirling approximation takes the form

\begin{aligned}\ln a! &\approx \ln \sqrt{2\pi} + \frac{1}{{2}} \ln a + a \ln a - a \\ &=\ln \sqrt{2\pi} + \left( a + \frac{1}{{2}} \right) \ln a - a\end{aligned} \hspace{\stretch{1}}(1.0.2)

Using this we have

\begin{aligned}\ln \left((N + X)/2\right)!=\ln \sqrt{2 \pi}+\left(\frac{N + 1 + X}{2} \right)\left(\ln \left(1 + \frac{X}{N}\right)+ \ln \frac{N}{2}\right)- \frac{N + X}{2}\end{aligned} \hspace{\stretch{1}}(1.0.3)

Adding \ln \left( (N + X)/2 \right)! + \ln \left( (N - X)/2 \right)!, we have

\begin{aligned}2 \ln \sqrt{2 \pi}-N+\left(\frac{N + 1 + X}{2} \right)\left(\ln \left(1 + \frac{X}{N}\right)+ \ln \frac{N}{2}\right)+\left(\frac{N + 1 - X}{2} \right)\left(\ln \left(1 - \frac{X}{N}\right)+ \ln \frac{N}{2}\right)=2 \ln \sqrt{2 \pi}-N+\left(\frac{N + 1}{2} \right)\left(\ln \left(1 - \frac{X^2}{N^2}\right)+ 2 \ln \frac{N}{2}\right)+\frac{X}{2}\left( \ln \left( 1 + \frac{X}{N} \right)- \ln \left( 1 - \frac{X}{N} \right)\right)\end{aligned} \hspace{\stretch{1}}(1.0.4)

Recall that we can expand the log around 1 with the slowly converging Taylor series

\begin{aligned}\ln( 1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4}\end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}\ln( 1 - x) = -x - \frac{x^2}{2} - \frac{x^3}{3} - \frac{x^4}{4},\end{aligned} \hspace{\stretch{1}}(1.0.5b)

but if x \ll 1 the first order term will dominate, so in this case where we assume X \ll N, we can approximate this sum of factorial logs to first order as

\begin{aligned}2 \ln \sqrt{2 \pi} -N+\left(\frac{N + 1}{2} \right)\left(- \frac{X^2}{N^2}+ 2 \ln \frac{N}{2}\right)+\frac{X}{2}\left( \frac{X}{N} + \frac{X}{N}\right) \\ &= 2 \ln \sqrt{2 \pi} -N+ \frac{X^2}{N} \left( - \frac{N + 1}{2N} + 1\right)+ (N + 1) \ln \frac{N}{2} \\ &\approx 2 \ln \sqrt{2 \pi} -N+ \frac{X^2}{2 N} + (N + 1) \ln \frac{N}{2}.\end{aligned} \hspace{\stretch{1}}(1.0.6)

Putting the bits together, we have

\begin{aligned}\ln P_N(X) &\approx - N \ln 2 + \left( N + \frac{1}{{2}}\right) \ln N - \not{{N}} - \ln \sqrt{2 \pi} + \not{{N}} -\frac{X^2}{2N} - (N + 1) \ln \frac{N}{2} \\ &= \left(-\not{{N}} + (\not{{N}} + 1) \ln 2\right)+\left(\not{{N}} + \frac{1}{{2}} - \not{{N}} - 1\right) \ln N- \ln \sqrt{2 \pi} - \frac{X^2}{2N} \\ &= \ln \left(\frac{2}{\sqrt{2 \pi N}}\right)-\frac{X^2}{2 N}\end{aligned} \hspace{\stretch{1}}(1.0.7)

Exponentiating gives us the desired result

\begin{aligned}\boxed{P_N(X) \rightarrow \frac{2}{\sqrt{2 \pi N}} e^{-\frac{X^2}{2 N}}.}\end{aligned} \hspace{\stretch{1}}(1.0.8)
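A quick numerical comparison of this limiting form against the exact binomial (my addition; N and the X range are arbitrary choices):

import numpy as np
from scipy.stats import binom

N = 1000
X = np.arange(-60, 62, 2)                          # same parity as N
exact = binom.pmf((N + X)//2, N, 0.5)              # 2^{-N} N!/(((N+X)/2)! ((N-X)/2)!)
gauss = 2/np.sqrt(2*np.pi*N) * np.exp(-X**2/(2*N))
print(np.abs(exact - gauss).max())                 # small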

Question: Binomial distribution for biased coin

Consider the more general case of a binomial distribution where the probability of a head is r and a tail is (1 - r) (a biased coin). With \text{head} = -1 and \text{tail} = +1, obtain the binomial distribution P_N(r,X) for obtaining a total of X from N coin tosses. What is the limiting form of this distribution when N \gg 1 and \left\langle{X - \left\langle{X}\right\rangle}\right\rangle \ll N? The latter condition simply means that I need to carry out any Taylor expansions in X about its mean value \left\langle{{X}}\right\rangle. The mean \left\langle{{X}}\right\rangle can be easily computed first in terms of “r”.

Answer

Let’s consider 1, 2, 3, and N tosses in sequence to understand the pattern.

1 toss

The base case has just two possibilities

  1. Heads, P = r, X = -1
  2. Tails, P = (1 - r), X = 1

If k = 0,1 for X = -1, 1 respectively, we have

\begin{aligned}P_1(r, X) = r^{1 - k} (1 - r)^{k}\end{aligned} \hspace{\stretch{1}}(1.0.9)

As a check, when r = 1/2 we have P_1(X) = 1/2

2 tosses

Our sample space is now a bit bigger

  1. (h,h), P = r^2, X = -2
  2. (h,t), P = r (1 - r), X = 0
  3. (t,h), P = r (1 - r), X = 0
  4. (t,t), P = (1 - r)^2, X = 2

Here P is the probability of the ordered sequence, but we are interested only in the probability of each specific value of X. For X = 0 there are \binom{2}{1} = 2 ways of picking a heads, tails combination.

Enumerating the probabilities, as before, with k = 0, 1, 2 for X = -2, 0, 2 respectively, we have

\begin{aligned}P_2(r, X) = r^{2 - k} (1 - r)^{k} \binom{2}{k}\end{aligned} \hspace{\stretch{1}}(1.0.10)

3 tosses

Increasing our sample space by one more toss our possibilities for all ordered triplets of toss results is

  1. (h,h,h), P = r^3, X = -3
  2. (h,h,t), P = r^2(1 - r), X = -1
  3. (h,t,h), P = r^2(1 - r), X = -1
  4. (h,t,t), P = r(1 - r)^2, X = 1
  5. (t,h,h), P = r^2(1 - r), X = -1
  6. (t,h,t), P = r(1 - r)^2, X = 1
  7. (t,t,h), P = r(1 - r)^2, X = 1
  8. (t,t,t), P = (1 - r)^3, X = 3

Here P is the probability of the ordered sequence, but we are still interested only in the probability of each specific value of X. We see that we have
\binom{3}{1} = \binom{3}{2} = 3 ways of picking some ordering of either (h,h,t) or (t,t,h)

Now enumerating the possibilities with k = 0, 1, 2, 3 for X = -3, -1, 1, 3 respectively, we have

\begin{aligned}P_3(r, X) = r^{3 - k} (1 - r)^{k} \binom{3}{k}\end{aligned} \hspace{\stretch{1}}(1.0.11)

n tosses

To generalize we need a mapping between our random variable X, and the binomial index k, but we know what that is from the fair coin problem, one of (N-X)/2 or (N + X)/2. To get the signs right, let’s evaluate (N \pm X)/2 for N = 3 and X \in \{-3, -1, 1, 3\}

Mapping between k and (N \pm X)/2 for N = 3:

X (N-X)/2 (N+X)/2
-3 3 0
-1 2 1
1 1 2
3 0 3

Using this, we see that the generalization to unfair coins of the binomial distribution is

\begin{aligned}\boxed{P_N(r, X) = r^{\frac{N-X}{2}} (1 - r)^{\frac{N+X}{2}} \frac{N!}{\left(\frac{N + X}{2}\right)!\left(\frac{N - X}{2}\right)!}}\end{aligned} \hspace{\stretch{1}}(1.0.12)

Checking against the fair result, we see that we have the 1/2^N factor when r = 1/2 as expected. Let’s check for X = -1 (two heads, one tail) to see if the exponents are right. That is

\begin{aligned}P_3(r, -1) = r^{\frac{3 + 1}{2}} (1 - r)^{\frac{3 - 1}{2}} \frac{3!}{\left(\frac{3 - 1}{2}\right)!\left(\frac{3 + 1}{2}\right)!}=r^2 (1-r) \frac{3!}{1! 2!}= 3 r^2 (1 - r)\end{aligned} \hspace{\stretch{1}}(1.0.13)

Good, we’ve got a r^2 (two heads) term as desired.
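As an additional check (mine), the distribution 1.0.12 sums to unity over the allowed values of X, and reproduces the 3 r^2 (1 - r) value computed above; a small Python sketch with arbitrary N and r:

import numpy as np
from scipy.special import comb

def P(N, r, X):
    # eq 1.0.12, with k = (N + X)/2 tails
    return r**((N - X)/2) * (1 - r)**((N + X)/2) * comb(N, (N + X)/2)

N, r = 7, 0.3
X = np.arange(-N, N + 1, 2)
print(P(N, r, X).sum())        # -> 1.0
print(P(3, r, -1))             # -> 3 r^2 (1 - r) = 0.189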

Limiting form

To determine the limiting behavior, we can utilize the Central limit theorem. We first have to calculate the mean and the variance for the N=1 case. The first two moments are

\begin{aligned}\left\langle{{X}}\right\rangle &= -1 r + 1 (1-r) \\ &= 1 - 2 r\end{aligned} \hspace{\stretch{1}}(1.0.14a)

\begin{aligned}\left\langle{{X^2}}\right\rangle &= (-1)^2 r + 1^2 (1-r) \\ &= 1\end{aligned} \hspace{\stretch{1}}(1.0.14b)

and the variance is

\begin{aligned}\left\langle{{X^2}}\right\rangle -\left\langle{{X}}\right\rangle^2 &= 1 - (1 - 2r)^2 \\ &= 1 - ( 1 - 4 r + 4 r^2 ) \\ &= 4 r - 4 r^2 \\ &= 4 r ( 1 - r )\end{aligned} \hspace{\stretch{1}}(1.0.15)

The Central Limit Theorem gives us

\begin{aligned}P_N(r, X) \rightarrow \frac{1}{{ \sqrt{8 \pi N r (1 - r) }}} \exp\left(- \frac{( X - N (1 - 2 r) )^2}{8 N r ( 1 - r )}\right),\end{aligned} \hspace{\stretch{1}}(1.0.16)

however, we saw in [1] that this theorem was derived for continuous random variables. Here we have random variables that only take on either odd or even integer values, with parity depending on whether N is odd or even. We’ll need to double the CLT result to account for this. This gives us

\begin{aligned}\boxed{P_N(r, X) \rightarrow \frac{1}{ \sqrt{2 \pi N r (1 - r) }} \exp\left(- \frac{( X - N (1 - 2 r) )^2}{8 N r ( 1 - r )}\right)}\end{aligned} \hspace{\stretch{1}}(1.0.17)

As a check we note that for r = 1/2 we have r(1-r) = 1/4 and 1 - 2r = 0, so we get

\begin{aligned}P_N(1/2, X) \rightarrow \frac{2}{ \sqrt{2 \pi N }} \exp\left(- \frac{ X^2}{2 N }\right).\end{aligned} \hspace{\stretch{1}}(1.0.18)

Observe that both this and 1.0.8 do not integrate to unity, but to 2. This is expected given the parity of the discrete random variable X. An integral normalization check is really only approximating the sum over integral values of our discrete random variable, and here we want to skip half of those values.
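A numerical comparison of 1.0.17 with the exact distribution (my addition; the values of N and r are arbitrary) shows the agreement directly. Note that the exact probabilities are computed with scipy’s binomial pmf in terms of the number of tails k = (N + X)/2, each of which occurs with probability 1 - r:

import numpy as np
from scipy.stats import binom

N, r = 2000, 0.3
X = np.arange(-N, N + 1, 2)
exact = binom.pmf((N + X)//2, N, 1 - r)
limit = np.exp(-(X - N*(1 - 2*r))**2 / (8*N*r*(1 - r))) / np.sqrt(2*np.pi*N*r*(1 - r))
print(np.abs(exact - limit).max())                 # small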

References

[1] Peter Young. Proof of the central limit theorem in statistics, 2009. URL http://physics.ucsc.edu/~peter/116C/clt.pdf. [Online; accessed 13-Jan-2013].

Posted in Math and Physics Learning. | Tagged: , , , , , , , , | Leave a Comment »

PHY452H1S Basic Statistical Mechanics. Lecture 4: Maxwell distribution for gases. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on January 17, 2013

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Review: Lead up to Maxwell distribution for gases

For the random walk, after a number of collisions N_{\mathrm{c}}, we found that a particle (labeled the ith) will have a velocity

\begin{aligned}\mathbf{v}_i(N_{\mathrm{c}}) = \mathbf{v}_i(0) + \sum_{l = 1}^{N_{\mathrm{c}}} \Delta \mathbf{v}_i(l)\end{aligned} \hspace{\stretch{1}}(1.1.1)

We argued that the probability distribution for finding a velocity \mathbf{v} was as in

\begin{aligned}\mathcal{P}_{N_{\mathrm{c}}}(\mathbf{v}_i) \propto \exp\left(-\frac{(\mathbf{v}_i - \mathbf{v}_i(0))^2}{ 2 N_{\mathrm{c}}}\right)\end{aligned} \hspace{\stretch{1}}(1.1.2)

Fig1: Velocity distribution found without considering kinetic energy conservation

 

What went wrong?

However, we know that this must be wrong, since we require

\begin{aligned}T = \frac{1}{{2}} \sum_{i = 1}^{n} \mathbf{v}_i^2 = \mbox{conserved}\end{aligned} \hspace{\stretch{1}}(1.2.3)

Where our argument went wrong is that when the particle has a greater than average velocity, the effect of a collision will be to slow it down. We have to account for

  1. Fluctuations \rightarrow “random walk”
  2. Dissipation \rightarrow “slowing down”

There were two ingredients to diffusion (the random walk), these were

  1. Conservation of particles

    \begin{aligned}\frac{\partial {c}}{\partial {t}} + \frac{\partial {j}}{\partial {x}} = 0\end{aligned} \hspace{\stretch{1}}(1.2.4)

    We can also think about conservation of particles in velocity space

    \begin{aligned}\frac{\partial {c}}{\partial {t}}(v, t) + \frac{\partial {j_v}}{\partial {v}} = 0\end{aligned} \hspace{\stretch{1}}(1.2.5)

    where j_v is a probability current in this velocity space.

  2. Fick’s law in velocity space takes the form

    \begin{aligned}j_v = -D \frac{\partial {c}}{\partial {v}}(v, t)\end{aligned} \hspace{\stretch{1}}(1.2.6)

The diffusion results in an “attempt” to flatten the concentration, as in (Fig 2).

Fig2: A friction-like term is required to oppose the diffusion pressure

We’d like to add an extra friction-like term to the diffusion current

\begin{aligned}j_v = -D \frac{\partial {c}}{\partial {v}}(v, t) - \eta v c(v)\end{aligned} \hspace{\stretch{1}}(1.2.7)

We want this term directed opposite to the velocity and proportional to the concentration

\begin{aligned}\text{Diffusion current} \equiv -D \frac{\partial {c}}{\partial {v}}(v, t)\end{aligned} \hspace{\stretch{1}}(1.0.8a)

\begin{aligned}\text{Dissipation current} \equiv - \eta v c(v, t)\end{aligned} \hspace{\stretch{1}}(1.0.8b)

This gives

\begin{aligned}\frac{\partial {c}}{\partial {t}}(v, t) &= -\frac{\partial {j_v}}{\partial {v}} \\ &= -\frac{\partial {}}{\partial {v}} \left(-D \frac{\partial {c}}{\partial {v}}(v, t) - \eta v c(v)\right) \\ &= D \frac{\partial^2 {{c}}}{\partial {{v}}^2}(v, t) + \eta \frac{\partial {}}{\partial {v}}\left(v c(v, t)\right)\end{aligned} \hspace{\stretch{1}}(1.0.9)

Can we find a steady state solution to this equation when t \rightarrow \infty? For such a steady state we have

\begin{aligned}0 = D \frac{d^2 c}{dv^2} + \eta \frac{d}{dv} \left(v c\right)\end{aligned} \hspace{\stretch{1}}(1.0.10)

Integrating once we have

\begin{aligned}D \frac{d c}{dv} = -\eta v c + \text{constant}\end{aligned} \hspace{\stretch{1}}(1.0.11)

Supposing that dc/dv = 0 at v = 0 (so that the constant vanishes), integrating once more we have

\begin{aligned}\boxed{c(v) \propto \exp\left(- \frac{\eta v^2}{2 D}\right).}\end{aligned} \hspace{\stretch{1}}(1.0.12)

This is the Maxwell-Boltzmann distribution, illustrated in (Fig3).

Fig3: Maxwell-Boltzmann distribution

The concentration c(v), suitably normalized, is itself a probability distribution.

Calculating \left\langle{{v^2}}\right\rangle from this distribution we can identify the D/\eta factor.

\begin{aligned}\left\langle{{v^2}}\right\rangle &= \frac{\int v^2 e^{- \eta v^2/2D} dv}{\int e^{- \eta v^2/2D} dv} \\ &= \frac{\frac{D}{\eta} \int v \frac{d}{dv} \left( -e^{- \eta v^2/2D} \right) dv}{\int e^{- \eta v^2/2D} dv} \\ &= -\frac{D}{\eta} \frac{\int -e^{- \eta v^2/2D} dv}{\int e^{- \eta v^2/2D} dv} \\ &= \frac{D}{\eta}.\end{aligned} \hspace{\stretch{1}}(1.13)

This also happens to be the energy in terms of temperature (we can view this as a definition of the temperature for now), writing

\begin{aligned}\frac{1}{{2}} m \left\langle{{\mathbf{v}^2}}\right\rangle = \frac{1}{{2}} m \left( \frac{D}{\eta} \right) = \frac{1}{{2}} k_{\mathrm{B}} T\end{aligned} \hspace{\stretch{1}}(1.0.14)

Here

\begin{aligned}k_{\mathrm{B}} = \mbox{Boltzmann constant}\end{aligned} \hspace{\stretch{1}}(1.0.15a)

\begin{aligned}T = \mbox{absolute temperature}\end{aligned} \hspace{\stretch{1}}(1.0.15b)

Equilibrium steady states

Fluctuations \leftrightarrow Dissipation

\begin{aligned}\boxed{\frac{D}{\eta} = \frac{ k_{\mathrm{B}} T}{m}}\end{aligned} \hspace{\stretch{1}}(1.0.16)

This is a specific example of the more general Fluctuation-Dissipation theorem.
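
As an illustration (a rough finite difference sketch of my own, not from the lecture; the grid, time step, and the values of D and η below are arbitrary assumptions), we can integrate the velocity space equation 1.0.9 forward in time and watch it relax to the steady state 1.0.12, whose variance should be D/η = k_B T/m.

import numpy as np

D, eta = 1.0, 0.5                        # arbitrary values for the check
v = np.linspace(-10, 10, 201)
dv = v[1] - v[0]
dt = 0.002                               # below the FTCS stability limit dv**2 / (2 D)

c = np.where(np.abs(v) < 3, 1.0, 0.0)    # arbitrary initial concentration
c /= c.sum() * dv

for _ in range(20000):                   # integrate out to t = 40, much longer than 1/eta
    diffusion = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dv**2
    dissipation = eta * (np.roll(v * c, -1) - np.roll(v * c, 1)) / (2 * dv)
    c = c + dt * (diffusion + dissipation)
    c[0] = c[-1] = 0.0                   # far boundaries, where c is negligible

c /= c.sum() * dv
print("numerical <v^2> =", (v**2 * c).sum() * dv, " D/eta =", D / eta)

Whatever (normalizable) initial concentration we start with, the numerical second moment settles onto D/η.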

Generalizing to 3D

The total velocity space current (Fick’s law diffusion plus the dissipation term) and the continuity equation in 3D are respectively

\begin{aligned}\mathbf{j} = -D \boldsymbol{\nabla}_\mathbf{v} c(\mathbf{v}, t) - \eta \mathbf{v} c(\mathbf{v}, t)\end{aligned} \hspace{\stretch{1}}(1.0.17a)

\begin{aligned}\frac{\partial {}}{\partial {t}} c(\mathbf{v}, t) + \boldsymbol{\nabla}_\mathbf{v} \cdot \mathbf{j}(\mathbf{v}, t) = 0\end{aligned} \hspace{\stretch{1}}(1.0.17b)

As above we have for the steady state

\begin{aligned}0 &= \frac{\partial {}}{\partial {t}} c(\mathbf{v}, t) \\ &= -\boldsymbol{\nabla}_\mathbf{v} \cdot \left( -D \boldsymbol{\nabla}_\mathbf{v} c(\mathbf{v}, t) - \eta \mathbf{v} c(\mathbf{v}, t)\right) \\ &= D \boldsymbol{\nabla}^2_\mathbf{v} c + \eta \boldsymbol{\nabla}_\mathbf{v} \cdot (\mathbf{v} c)\end{aligned} \hspace{\stretch{1}}(1.0.18)

Integrating once in velocity space

\begin{aligned}D \boldsymbol{\nabla}_\mathbf{v} c = -\eta \mathbf{v} c + \text{vector constant, assumed zero}\end{aligned} \hspace{\stretch{1}}(1.0.19)

This is a set of three equations, one for each component v_\alpha of \mathbf{v}

\begin{aligned}\frac{\partial {c}}{\partial {v_\alpha}} = -\frac{\eta}{D} v_\alpha c \end{aligned} \hspace{\stretch{1}}(1.0.20)

So that our steady state solution is

\begin{aligned}c(\mathbf{v}, t \rightarrow \infty) \propto \exp\left(-\frac{v_x^2 +v_y^2 +v_z^2 }{ 2 (D/\eta) }\right).\end{aligned} \hspace{\stretch{1}}(1.0.21)

Computing the average 3D squared velocity for this distribution, we have

\begin{aligned}\frac{1}{{2}} m \left\langle{{\mathbf{v} \cdot \mathbf{v}}}\right\rangle &= \frac{1}{{2}} m \frac{ \int dv_x dv_y dv_z \left( v_x^2 + v_y^2 + v_z^2 \right) \exp \left( -\frac{ v_x^2 + v_y^2 + v_z^2 }{ 2 (D/\eta) } \right)}{ \int dv_x dv_y dv_z \exp \left( -\frac{ v_x^2 + v_y^2 + v_z^2 }{ 2 (D/\eta) } \right)} \\ &= \frac{1}{{2}} m \left( \frac{ \int dv_x v_x^2 \exp \left( -\frac{ v_x^2 }{ 2 (D/\eta) } \right)}{ \int dv_x \exp \left( -\frac{ v_x^2 }{ 2 (D/\eta) } \right)}+\frac{ \int dv_y v_y^2 \exp \left( -\frac{ v_y^2 }{ 2 (D/\eta) } \right)}{ \int dv_y \exp \left( -\frac{ v_y^2 }{ 2 (D/\eta) } \right)}+\frac{ \int dv_z v_z^2 \exp \left( -\frac{ v_z^2 }{ 2 (D/\eta) } \right)}{ \int dv_z \exp \left( -\frac{ v_z^2 }{ 2 (D/\eta) } \right)} \right) \\ &= \frac{1}{{2}} m \left( 3 \frac{D}{\eta} \right) \\ &= 3 \left( \frac{1}{2} k_{\mathrm{B}} T\right) \\ &= \frac{3}{2} k_{\mathrm{B}} T\end{aligned} \hspace{\stretch{1}}(1.0.22)

For each component v_\alpha, the integrals over the other two components cancel between numerator and denominator, leaving us with the usual 3 k_{\mathrm{B}} T/2 ideal gas kinetic energy.
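
A quick Monte Carlo check of this (my own illustration; the unit choices for m, k_B, and T are arbitrary): sampling velocity components from the steady state 1.0.21, the average kinetic energy should come out to (3/2) k_B T.

import numpy as np

rng = np.random.default_rng(0)
m, kB, T = 2.0, 1.0, 1.5                 # arbitrary units for the check
sigma = np.sqrt(kB * T / m)              # per-component standard deviation, sqrt(D/eta)

v = rng.normal(0.0, sigma, size=(1_000_000, 3))
kinetic = 0.5 * m * np.sum(v**2, axis=1)
print(kinetic.mean(), "vs", 1.5 * kB * T)   # should agree to a few decimal places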

Phase space

Now let’s switch directions a bit and look at a more general system, described by the phase space of generalized coordinates

\begin{aligned}\{ x_{i_\alpha}(t),p_{i_\alpha}(t) \}\end{aligned} \hspace{\stretch{1}}(1.0.23)

Here

  1. i = \mbox{molecule or particle number}
  2. \alpha \in \{x, y, z\}
  3. \mbox{Dimension} = N_{\text{particles}} \times 2 \times d, where d is the physical space dimension.

The motion in phase space is governed by how each of these coordinates changes for all the particles. Assuming a Hamiltonian H, and recalling that H = \dot{x} p - \mathcal{L}, we have

\begin{aligned}\frac{\partial {H}}{\partial {p}} = \dot{x}\end{aligned} \hspace{\stretch{1}}(1.0.24a)

\begin{aligned}\frac{\partial {H}}{\partial {x}} = -\frac{\partial {\mathcal{L}}}{\partial {x}} = - \frac{d{{p}}}{dt},\end{aligned} \hspace{\stretch{1}}(1.0.24b)

we have for the following set of equations describing the entire system

\begin{aligned}\frac{d{{}}}{dt} x_{i_\alpha}(t) = \frac{\partial {H}}{\partial {p_{i_\alpha}}}\end{aligned} \hspace{\stretch{1}}(1.0.25a)

\begin{aligned}\frac{d{{}}}{dt} p_{i_\alpha}(t)= -\frac{\partial {H}}{\partial {x_{i_\alpha}}}\end{aligned} \hspace{\stretch{1}}(1.0.25b)

Example, 1D SHO

\begin{aligned}H = \frac{1}{{2}} \frac{p^2}{m} + \frac{1}{{2}} k x^2\end{aligned} \hspace{\stretch{1}}(1.0.26)

This has phase space trajectories as in (Fig4).

Fig4: Classical SHO phase space trajectories
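
As a small numerical aside (my own sketch; the values of m, k, the step size, and the initial condition are arbitrary assumptions), integrating Hamilton’s equations 1.0.25 for this Hamiltonian with a leapfrog step traces out the closed ellipse of (Fig4), and the energy stays constant to the accuracy of the integrator.

import numpy as np

m, k = 1.0, 4.0                        # arbitrary choices
dt, nsteps = 0.001, 10_000
x, p = 1.0, 0.0                        # arbitrary initial condition

def H(x, p):
    return 0.5 * p**2 / m + 0.5 * k * x**2

H0 = H(x, p)
for _ in range(nsteps):
    p -= 0.5 * dt * k * x              # dp/dt = -dH/dx = -k x  (half kick)
    x += dt * p / m                    # dx/dt =  dH/dp = p/m   (drift)
    p -= 0.5 * dt * k * x              # second half kick

print("relative energy drift:", abs(H(x, p) - H0) / H0)   # tiny for leapfrog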

Posted in Math and Physics Learning. | Leave a Comment »

PHY452H1S Basic Statistical Mechanics. Lecture 3: Random walk. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on January 16, 2013

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. May not be entirely coherent.

Spatial Random walk

One dimensional case

Fig 1: One dimensional random walk

 

With a time interval \delta t, our total time is

\begin{aligned}t = N \delta t.\end{aligned} \hspace{\stretch{1}}(1.1.1)

We form a random variable for the total distance moved

\begin{aligned}X = \sum_{i = 1}^N \delta x_i\end{aligned} \hspace{\stretch{1}}(1.1.2)

\begin{aligned}\left\langle{{X}}\right\rangle = \sum_{i = 1}^N \left\langle{{\delta x_i}}\right\rangle = 0\end{aligned} \hspace{\stretch{1}}(1.1.3)

\begin{aligned}\left\langle{{X^2}}\right\rangle &= \left\langle{{\sum_{i = 1}^N \delta x_i\sum_{j = 1}^N \delta x_j}}\right\rangle \\ &= \sum_{i,j = 1}^N \left\langle{{\delta x_i \delta x_j}}\right\rangle \\ &= \sum_{i = 1}^N \left\langle{{(\delta x_i)^2 }}\right\rangle \\ &= \sum_{i = 1}^N \left\langle{{(\pm \delta x)^2 }}\right\rangle \\ &= (\delta x)^2 \sum_{i = 1}^N 1 \\ &= N (\delta x)^2\end{aligned} \hspace{\stretch{1}}(1.1.3)

In the above, the cross terms vanish because the steps are assumed uncorrelated, so that \left\langle{{\delta x_i \delta x_j}}\right\rangle = \left\langle{{\delta x_i}}\right\rangle \left\langle{{\delta x_j}}\right\rangle = 0 for i \ne j.
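
A minimal simulation (my own check; N, δx, and the number of walks are arbitrary choices) confirms the scaling: averaging over many independent walks, ⟨X⟩ stays near zero while ⟨X²⟩ comes out near N (δx)².

import numpy as np

rng = np.random.default_rng(1)
N, dx, nwalks = 1000, 0.5, 100_000     # arbitrary choices for the check

X = np.zeros(nwalks)
for _ in range(N):
    X += rng.choice([-dx, dx], size=nwalks)   # one +/- dx step for every walker

print("<X>   ~", X.mean())                    # should be near 0
print("<X^2> ~", (X**2).mean(), " N (dx)^2 =", N * dx**2)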

Two dimensional case

Fig 2: Two dimensional random walk

The two dimensional case can describe either a spatial generalization of the above, or a one dimensional problem with time evolution, as illustrated in (Fig 3)

Fig 3: Spatial and temporal evolution

 

\begin{aligned}\mathcal{P}( x, t ) &= \frac{1}{{2}} \mathcal{P}(x + \delta x, t - \delta t)+\frac{1}{{2}} \mathcal{P}(x - \delta x, t - \delta t) \\ &\approx\frac{1}{{2}}\left(\mathcal{P}(x, t - \delta t) + \not{{\frac{\partial {\mathcal{P}}}{\partial {x}}(x, t - \delta t) \delta x}}+\frac{1}{{2}} \frac{\partial^2 {{\mathcal{P}}}}{\partial {{x}}^2}(x, t - \delta t) (\delta x)^2\right)+\frac{1}{{2}}\left(\mathcal{P}(x, t - \delta t) - \not{{\frac{\partial {\mathcal{P}}}{\partial {x}}(x, t - \delta t) \delta x}}+\frac{1}{{2}} \frac{\partial^2 {{\mathcal{P}}}}{\partial {{x}}^2}(x, t - \delta t) (\delta x)^2\right) \\ &= \mathcal{P}(x, t - \delta t) +\frac{1}{{2}} \frac{\partial^2 {{\mathcal{P}}}}{\partial {{x}}^2}(x, t - \delta t) (\delta x)^2\end{aligned} \hspace{\stretch{1}}(1.5)

Since the second derivative term already carries the small factor (\delta x)^2, evaluating it at t instead of t - \delta t introduces only higher order corrections, so this is approximately

\begin{aligned}\mathcal{P}(x, t) \approx\mathcal{P}(x, t - \delta t) +\frac{1}{{2}} \frac{\partial^2 {{\mathcal{P}}}}{\partial {{x}}^2}(x, t) (\delta x)^2\end{aligned} \hspace{\stretch{1}}(1.1.6)

Taylor expanding \mathcal{P}(x, t - \delta t) \approx \mathcal{P}(x, t) - \delta t \frac{\partial {\mathcal{P}}}{\partial {t}}(x, t), this becomes

\begin{aligned}\delta t \frac{\partial {\mathcal{P}}}{\partial {t}}(x, t) \approx \frac{1}{{2}} \frac{\partial^2 {{\mathcal{P}}}}{\partial {{x}}^2}(x, t) (\delta x)^2\end{aligned} \hspace{\stretch{1}}(1.1.7)

\begin{aligned}\frac{\partial {\mathcal{P}}}{\partial {t}}(x, t) =  \underbrace{\frac{1}{{2}} \frac{(\delta x)^2}{\delta t}}_{\equiv D, \mbox{the diffusion constant}} \frac{\partial^2 {{\mathcal{P}}}}{\partial {{x}}^2}(x, t) \end{aligned} \hspace{\stretch{1}}(1.1.8)

We see that random microscopics lead to a well-defined macroscopic equation. Scaling the probability density by the total number of particles N gives the particle density, or concentration

\begin{aligned}N \mathcal{P}(x, t) = \underbrace{C(x, t)}_{\mbox{particle density or concentration}}\end{aligned} \hspace{\stretch{1}}(1.1.9)
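
As a quick illustration (a finite difference sketch of my own; the value of D, the grid, and the initial condition are arbitrary assumptions), integrating 1.1.8 from a narrow initial distribution shows the variance of P(x, t) growing as 2 D t, which matches the microscopic result ⟨X²⟩ = N (δx)² = 2 D t.

import numpy as np

D = 0.5                                    # arbitrary diffusion constant
x = np.linspace(-20, 20, 801)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D                       # safely below the FTCS stability limit

P = np.exp(-x**2 / (2 * 0.04))             # narrow initial distribution, variance 0.04
P /= P.sum() * dx

t = 0.0
for _ in range(5000):
    P = P + dt * D * (np.roll(P, -1) - 2 * P + np.roll(P, 1)) / dx**2
    P[0] = P[-1] = 0.0                     # far boundaries, where P is negligible
    t += dt

variance = (x**2 * P).sum() * dx / (P.sum() * dx)
print("variance:", variance, " 2 D t + initial:", 2 * D * t + 0.04)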

Continuity equation and Fick’s law

Requirements for well-defined diffusion results

  • Particle number conservation (local). Imagine that we put a drop of ink in water. We’ll see the ink gradually spread out and appear to disappear, but the particles are still there. We require local particle number conservation for well-defined diffusion results.

    This is expressed as a continuity equation and illustrated in (Fig 4).

    \begin{aligned}\frac{\partial {\mathcal{P}}}{\partial {t}} + \frac{\partial {J}}{\partial {x}} = 0.\end{aligned} \hspace{\stretch{1}}(1.2.10)

    Fig 4: probability changes by flows through the boundary

     

    Here J is the probability current density, and \mathcal{P} is the probability density.

  • Phenomenological law, or Fick’s law.

    \begin{aligned}J = - D \frac{\partial {\mathcal{P}}}{\partial {x}}\end{aligned} \hspace{\stretch{1}}(1.2.11)

    The probability current is proportional to, and directed against, the gradient of the probability density.

    Fig 5: probability current proportional to gradient

     

Combining the above we have

\begin{aligned}0 = \frac{\partial {\mathcal{P}}}{\partial {t}} + \frac{\partial {}}{\partial {x}} \left( - D \frac{\partial {\mathcal{P}}}{\partial {x}}\right).\end{aligned} \hspace{\stretch{1}}(1.2.12)

So for constant diffusion rates D we also arrive at the random walk diffusion result

\begin{aligned}\boxed{\frac{\partial {\mathcal{P}}}{\partial {t}} = D \frac{\partial^2 {{\mathcal{P}}}}{\partial {{x}}^2} .}\end{aligned} \hspace{\stretch{1}}(1.2.13)
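
The conservation structure is easy to see in a discretized form (a sketch of my own, with arbitrary parameters): if P is updated through Fick’s-law fluxes at cell faces, as in 1.2.10 and 1.2.11, the total probability is conserved exactly, however crude the time stepping.

import numpy as np

D = 1.0                                      # arbitrary choices throughout
x = np.linspace(-10, 10, 401)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D

P = np.exp(-x**2)
P /= P.sum() * dx

for _ in range(2000):
    J = -D * np.diff(P) / dx                      # Fick's law flux at interior cell faces
    Jfaces = np.concatenate(([0.0], J, [0.0]))    # zero flux through the closed ends
    P = P - dt * np.diff(Jfaces) / dx             # dP/dt = -dJ/dx, in conservative form

print("total probability:", P.sum() * dx)         # stays at 1 to machine precision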

Random walk in velocity space

Fig 6: Single atom velocity view

 

Imagine that we are following one particle (particle i) in a gas, initially propagating without interaction at some velocity \mathbf{v}_i(0). After one collision we have

\begin{aligned}\mathbf{v}_i(1) = \mathbf{v}_i(0) + \Delta \mathbf{v}_i(1).\end{aligned} \hspace{\stretch{1}}(1.3.14)

After two collisions

\begin{aligned}\mathbf{v}_i(2) = \mathbf{v}_i(1) + \Delta \mathbf{v}_i(2)\end{aligned} \hspace{\stretch{1}}(1.3.15)

After N_{\mathrm{c}} collisions

\begin{aligned}\mathbf{v}_i(N_{\mathrm{c}}) = \mathbf{v}_i(0) + \Delta \mathbf{v}_i(1) + \Delta \mathbf{v}_i(2) + \cdots+ \Delta \mathbf{v}_i(N_{\mathrm{c}}) \end{aligned} \hspace{\stretch{1}}(1.3.16)

where N_{\mathrm{c}} is the number of collisions. We expect

\begin{aligned}\left\langle{{\mathbf{v}_i}}\right\rangle = \mathbf{v}_i(0)\end{aligned} \hspace{\stretch{1}}(1.0.17a)

\begin{aligned}\left\langle{{\mathbf{v}_i^2}}\right\rangle \propto N_{\mathrm{c}}\end{aligned} \hspace{\stretch{1}}(1.0.17b)

\begin{aligned}\mathcal{P}_{N_{\mathrm{c}}}(\mathbf{v}) \propto e^{-\frac{(\mathbf{v} - \mathbf{v}_0)^2}{2 N_{\mathrm{c}}}}\end{aligned} \hspace{\stretch{1}}(1.0.17c)

Here the mean of each velocity increment \Delta \mathbf{v}_i is expected to be zero because the individual collisions are thought to be uncorrelated, so the average velocity stays at its initial value \mathbf{v}_i(0).

We’ll see that there is something wrong with this, and will figure out how to fix this in the next lecture.

In particular, the kinetic energy of any given particle is

\begin{aligned}\text{KE}_i = \frac{1}{{2}} m \mathbf{v}_i^2\end{aligned} \hspace{\stretch{1}}(1.0.18)

and the sum of this is fixed for the complete system.

However, if we sum these kinetic energies, 1.0.17b above shows that the total grows with the number of collisions. Fixing this we will arrive at the Maxwell-Boltzmann distribution.
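
To see the problem numerically (a quick simulation of my own, with Gaussian kicks of arbitrary unit variance standing in for the collisions), summing uncorrelated velocity increments as in 1.3.16 makes the average kinetic energy grow linearly with the number of collisions, instead of staying fixed.

import numpy as np

rng = np.random.default_rng(2)
nparticles, m = 10_000, 1.0
v = np.zeros((nparticles, 3))                 # take v_i(0) = 0 for simplicity

for Nc in range(1, 201):
    v += rng.normal(0.0, 1.0, size=(nparticles, 3))   # one uncorrelated kick per collision
    if Nc % 50 == 0:
        kinetic = 0.5 * m * np.sum(v**2, axis=1)
        print(Nc, kinetic.mean())             # grows roughly linearly with Nc

This is exactly the runaway that the dissipation term of the next lecture is introduced to cure.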

Posted in Math and Physics Learning. | Leave a Comment »