Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Posts Tagged ‘phase space’

An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 3, 2013

In A compilation of notes, so far, for ‘PHY452H1S Basic Statistical Mechanics’ I posted a link to this compilation of statistical mechanics course notes.

That compilation now includes all of the following too (no further updates will be made to any of these):

February 28, 2013 Rotation of diatomic molecules

February 28, 2013 Helmholtz free energy

February 26, 2013 Statistical and thermodynamic connection

February 24, 2013 Ideal gas

February 16, 2013 One dimensional well problem from Pathria chapter II

February 15, 2013 1D pendulum problem in phase space

February 14, 2013 Continuing review of thermodynamics

February 13, 2013 Lightning review of thermodynamics

February 11, 2013 Cartesian to spherical change of variables in 3d phase space

February 10, 2013 n SHO particle phase space volume

February 10, 2013 Change of variables in 2d phase space

February 10, 2013 Some problems from Kittel chapter 3

February 07, 2013 Midterm review, thermodynamics

February 06, 2013 Limit of unfair coin distribution, the hard way

February 05, 2013 Ideal gas and SHO phase space volume calculations

February 03, 2013 One dimensional random walk

February 02, 2013 1D SHO phase space

February 02, 2013 Application of the central limit theorem to a product of random vars

January 31, 2013 Liouville’s theorem questions on density and current

January 30, 2013 State counting

Posted in Math and Physics Learning. | 1 Comment »

One dimensional well problem from Pathria chapter II

Posted by peeterjoot on February 16, 2013

[Click here for a PDF of this post with nicer formatting]

Problem 2.5 [2] asks to show that

\begin{aligned}\oint p dq = \left( { n + \frac{1}{{2}} } \right) h,\end{aligned} \hspace{\stretch{1}}(1.0.1)

provided the particle’s potential is such that

\begin{aligned}m \hbar \left\lvert { \frac{dV}{dq} } \right\rvert \ll \left( { m ( E - V ) } \right)^{3/2}.\end{aligned} \hspace{\stretch{1}}(1.0.2)

I took a guess that this was actually the WKB condition

\begin{aligned}\frac{k'}{k^2} \ll 1,\end{aligned} \hspace{\stretch{1}}(1.0.3)

where the WKB solution was of the form

\begin{aligned}k^2(q) = 2 m (E - V(q))/\hbar^2\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}\psi(q) = \frac{1}{{\sqrt{k}}} e^{\pm i \int k(q) dq}.\end{aligned} \hspace{\stretch{1}}(1.0.4b)

The WKB validity condition is

\begin{aligned}1 \gg \frac{-2 m V'}{\hbar} \frac{1}{{2}} \frac{1}{{\sqrt{2 m (E - V)}}} \frac{\hbar^2}{2 m(E - V)}\end{aligned} \hspace{\stretch{1}}(1.0.5)


\begin{aligned}m \hbar \left\lvert {V'} \right\rvert \ll \left( {2 m (E - V)} \right)^{3/2}.\end{aligned} \hspace{\stretch{1}}(1.0.6)

This differs by a factor of 2 \sqrt{2} from the constraint specified in the problem, but I’m guessing that constant factors of that sort have just been dropped.

Even after figuring out that this question was referring to WKB, I didn’t know what to make of the oriented integral \oint p dq. With p being an operator in the QM context, what did this even mean? I found the answer in [1] section 12.12. Here p just means

\begin{aligned}p(q) = \hbar k(q),\end{aligned} \hspace{\stretch{1}}(1.0.7)

where k(q) is given by eq. 1.0.4a. The rest of the problem can also be found there and relies on the WKB connection formulas, which aren’t derived in any text that I own. Quoting results based on other results whose origin I don’t know isn’t worthwhile, so that’s as far as I’ll attempt this question (but I do plan to eventually look up and understand those WKB connection formulas, and then see how they can be applied in a problem like this).
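Although the general result needs those connection formulas, the SHO special case can at least be spot-checked numerically: with E = (n + 1/2) \hbar \omega, the loop integral \oint p dq should come out to exactly (n + 1/2) h. Here is a quick Python sketch of that check (not from the text; the units \hbar = m = \omega = 1 are my own choice, so h = 2\pi and p(q) = \sqrt{2E - q^2}):

```python
import math

# Check that oint p dq = (n + 1/2) h for the 1D SHO at E = (n + 1/2) hbar omega.
# Units: hbar = m = omega = 1, so h = 2 pi and p(q) = sqrt(2 E - q^2).

def loop_integral(E, steps=20000):
    """Evaluate oint p dq = 2 * integral_{-q0}^{q0} sqrt(2E - q^2) dq,
    using the substitution q = q0 sin(u) to tame the turning points."""
    q0 = math.sqrt(2.0 * E)
    du = math.pi / steps          # u runs over (-pi/2, pi/2)
    total = 0.0
    for k in range(steps):
        u = -math.pi / 2 + (k + 0.5) * du   # midpoint rule
        # sqrt(2E - q^2) dq becomes q0^2 cos^2(u) du under the substitution
        total += q0 * q0 * math.cos(u) ** 2 * du
    return 2.0 * total

h = 2.0 * math.pi
for n in range(4):
    E = n + 0.5
    print(n, loop_integral(E) / h)   # approximately n + 1/2
```

The midpoint sum is essentially exact here since the substituted integrand is smooth and periodic over the integration window.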


[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

[2] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

Posted in Math and Physics Learning. | 2 Comments »

1D pendulum problem in phase space

Posted by peeterjoot on February 15, 2013

[Click here for a PDF of this post with nicer formatting]

Problem 2.6 in [1] asks for some analysis of the (presumably small angle) pendulum problem in phase space, including relating the energy and period of the system to the area A enclosed by a phase space trajectory. With coordinates as in fig. 1.1, our Lagrangian is


Fig 1.1: 1d pendulum

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m l^2 \dot{\theta}^2 - g m l ( 1 - \cos\theta ).\end{aligned} \hspace{\stretch{1}}(1.0.1)

As a sign check we find for small \theta from the Euler-Lagrange equations \ddot{\theta} = -(g/l) \theta as expected. For the Hamiltonian, we need the canonical momentum

\begin{aligned}p_\theta = \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} = m l^2 \dot{\theta}.\end{aligned} \hspace{\stretch{1}}(1.0.2)

Observe that this canonical momentum does not have dimensions of momentum, but that of angular momentum (m l \dot{\theta} \times l).

Our Hamiltonian is

\begin{aligned}H = \frac{1}{{2 m l^2}} p_\theta^2 + g m l ( 1 - \cos\theta ).\end{aligned} \hspace{\stretch{1}}(1.0.3)

Hamilton’s equations for this system, in matrix form are

\begin{aligned}\frac{d{{}}}{dt}\begin{bmatrix}\theta \\ p_\theta\end{bmatrix}=\begin{bmatrix}\frac{\partial {H}}{\partial {p_\theta}} \\ -\frac{\partial {H}}{\partial {\theta}} \end{bmatrix}=\begin{bmatrix}p_\theta/m l^2 \\ - g m l \sin\theta\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.4)

With \omega = \sqrt{g/l}, it is convenient to non-dimensionalize this

\begin{aligned}\frac{d{{}}}{dt}\begin{bmatrix}\theta \\ p_\theta/ \omega m l^2\end{bmatrix}=\omega\begin{bmatrix}p_\theta/\omega m l^2 \\ - \sin\theta\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

Now we can make the small angle approximation. Writing

\begin{aligned}\mathbf{u} = \begin{bmatrix}\theta \\ p_\theta/ \omega m l^2\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.6a)

\begin{aligned}i = \begin{bmatrix}0 & 1 \\ -1 & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.6b)

Our pendulum equation is reduced to

\begin{aligned}\mathbf{u}' = i \omega \mathbf{u},\end{aligned} \hspace{\stretch{1}}(1.0.7)

with a solution that we can read off by inspection

\begin{aligned}\mathbf{u} = e^{i \omega t} \mathbf{u}_0=\begin{bmatrix}\cos\omega t & \sin\omega t \\ -\sin\omega t & \cos \omega t\end{bmatrix}\mathbf{u}_0\end{aligned} \hspace{\stretch{1}}(1.0.8)

Let’s put the initial phase space point into polar form

\begin{aligned}\mathbf{u}_0^2= \theta_0^2 + \frac{p_0^2}{\omega^2 m^2 l^4}= \frac{2}{\omega^2 m l^2}\left( { \frac{p_0^2}{2 m l^2} + \frac{1}{{2}} \omega^2 m l^2 \theta_0^2 } \right)=\frac{2}{g m l}\left( { \frac{p_0^2}{2 m l^2} + \frac{1}{{2}} g m l \theta_0^2 } \right)\end{aligned} \hspace{\stretch{1}}(1.0.9)

This doesn’t appear to be an exact match for eq. 1.0.3, but we can write for small \theta_0

\begin{aligned}1 - \cos\theta_0=2 \sin^2 \left( { \frac{\theta_0}{2} } \right)\approx2 \left( { \frac{\theta_0}{2} } \right)^2=\frac{\theta_0^2}{2}.\end{aligned} \hspace{\stretch{1}}(1.0.10)

This shows that we can rewrite our initial conditions as

\begin{aligned}\mathbf{u}_0 = \sqrt{ \frac{2 E}{g m l} }e^{i \phi }\begin{bmatrix}1 \\ 0\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.11)


\begin{aligned}\tan \phi =\left( { \omega m l^2 \theta_0/ p_0 } \right).\end{aligned} \hspace{\stretch{1}}(1.0.12)

Our time evolution in phase space is given by

\begin{aligned}\begin{bmatrix}\theta(t) \\ p_\theta(t)\end{bmatrix}=\sqrt{ \frac{2 E}{g m l} }\begin{bmatrix}\cos(\omega t + \phi) \\ - \omega m l^2\sin(\omega t + \phi)\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.13)


\begin{aligned}\boxed{\begin{bmatrix}\theta(t) \\ p_\theta(t)\end{bmatrix}=\frac{1}{{\omega l}}\sqrt{ \frac{2 E}{m} }\begin{bmatrix}\cos(\omega t + \phi) \\ - \omega m l^2\sin(\omega t + \phi)\end{bmatrix}.}\end{aligned} \hspace{\stretch{1}}(1.0.14)

This is plotted in fig. 1.2.


Fig 1.2: Phase space trajectory for small angle pendulum

The area of this ellipse is

\begin{aligned}A = \pi \frac{1}{{\omega^2 l^2}} \frac{2 E}{m} \omega m l^2 = \frac{2 \pi}{\omega} E.\end{aligned} \hspace{\stretch{1}}(1.0.15)

With \tau for the period of the trajectory, this is

\begin{aligned}A = \tau E.\end{aligned} \hspace{\stretch{1}}(1.0.16)

As a final note, observe that the oriented integral from problem 2.5 of the text, \oint p_\theta d\theta, is also this area. This is a general property, which can be seen geometrically in fig. 1.3, where we see that a counterclockwise oriented integral \oint p dq would give the negative area. The integrals along the c_4, c_1 paths give the area under the blob, whereas the integrals along the other paths, where the sense is opposite, give the complete area under the top boundary. Since they are oppositely sensed, adding them gives just the area of the blob.


Fig 1.3: Area from oriented integral along path

Let’s do this \oint p_\theta d\theta integral for the pendulum phase trajectories. With

\begin{aligned}\theta = \frac{1}{{\omega l}} \sqrt{\frac{2 E}{m}} \cos(\omega t + \phi)\end{aligned} \hspace{\stretch{1}}(1.0.17a)

\begin{aligned}p_\theta = -m l \sqrt{\frac{2 E}{m}} \sin(\omega t + \phi)\end{aligned} \hspace{\stretch{1}}(1.0.17b)

We have

\begin{aligned}\oint p_\theta d\theta = \frac{m l}{\omega l} \frac{2 E}{m} \int_0^{2\pi/\omega} \sin^2( \omega t + \phi) \omega dt= 2 E \int_0^{2\pi/\omega} \frac{ 1 - \cos\left( { 2(\omega t + \phi) } \right) }{2} dt= E \frac{2 \pi}{\omega} = E \tau.\end{aligned} \hspace{\stretch{1}}(1.0.18)
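This last result is easy to verify numerically from the parametrization of eqs. 1.0.17. A small Python sketch (the values of m, l, g and E are arbitrary illustrative choices of mine, with the phase \phi set to zero):

```python
import math

# Numeric check that oint p_theta d theta = E tau for the small-angle pendulum,
# using the parametrization of eqs. 1.0.17 (with phi = 0).
m, l, g, E = 1.3, 0.7, 9.8, 0.2
omega = math.sqrt(g / l)
tau = 2.0 * math.pi / omega

def p_theta(t):
    return -m * l * math.sqrt(2 * E / m) * math.sin(omega * t)

def dtheta_dt(t):
    # derivative of theta(t) = sqrt(2E/m) cos(omega t) / (omega l)
    return -math.sqrt(2 * E / m) * omega * math.sin(omega * t) / (omega * l)

# oint p dtheta = integral_0^tau p_theta(t) theta'(t) dt, midpoint rule
steps = 20000
dt = tau / steps
area = sum(p_theta((k + 0.5) * dt) * dtheta_dt((k + 0.5) * dt) * dt
           for k in range(steps))

print(area, E * tau)   # the two agree
```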


[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

Posted in Math and Physics Learning. | Leave a Comment »

Cartesian to spherical change of variables in 3d phase space

Posted by peeterjoot on February 11, 2013

[Click here for a PDF of this post with nicer formatting]

Question: Cartesian to spherical change of variables in 3d phase space

This is [1] problem 2.2 (a): try a spherical change of variables to verify explicitly that phase space volume is preserved.


Our kinetic Lagrangian in spherical coordinates is

\begin{aligned}\mathcal{L} &= \frac{1}{{2}} m \left(  \dot{r} \hat{\mathbf{r}} + r \sin\theta \dot{\phi} \hat{\boldsymbol{\phi}} + r \dot{\theta} \hat{\boldsymbol{\theta}} \right)^2 \\ &= \frac{1}{{2}} m \left(  \dot{r}^2 + r^2 \sin^2\theta \dot{\phi}^2 + r^2 \dot{\theta}^2  \right)\end{aligned} \hspace{\stretch{1}}(1.0.1)

We read off our canonical momenta

\begin{aligned}p_r &= \frac{\partial {\mathcal{L}}}{\partial {\dot{r}}} \\ &= m \dot{r}\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}p_\theta &= \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} \\ &= m r^2 \dot{\theta}\end{aligned} \hspace{\stretch{1}}(1.0.2b)

\begin{aligned}p_\phi &= \frac{\partial {\mathcal{L}}}{\partial {\dot{\phi}}} \\ &= m r^2 \sin^2\theta \dot{\phi},\end{aligned} \hspace{\stretch{1}}(1.0.2c)

and can now express the Hamiltonian in spherical coordinates

\begin{aligned}H &= \frac{1}{{2}} m \left(\left( \frac{p_r}{m} \right)^2+ r^2 \sin^2\theta \left( \frac{p_\phi}{m r^2 \sin^2\theta} \right)^2+ r^2 \left( \frac{p_\theta}{m r^2} \right)^2\right) \\ &= \frac{p_r^2}{2m} + \frac{p_\phi^2}{2 m r^2 \sin^2\theta} + \frac{p_\theta^2}{2 m r^2}\end{aligned} \hspace{\stretch{1}}(1.0.3)

Now we want to do a change of variables. The coordinates transform as

\begin{aligned}x = r \sin\theta \cos\phi\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}y = r \sin\theta \sin\phi\end{aligned} \hspace{\stretch{1}}(1.0.4b)

\begin{aligned}z = r \cos\theta,\end{aligned} \hspace{\stretch{1}}(1.0.4c)


\begin{aligned}r = \sqrt{x^2 + y^2 + z^2}\end{aligned} \hspace{\stretch{1}}(1.0.5a)

\begin{aligned}\theta = \arccos(z/r)\end{aligned} \hspace{\stretch{1}}(1.0.5b)

\begin{aligned}\phi = \arctan(y/x).\end{aligned} \hspace{\stretch{1}}(1.0.5c)

It’s not too hard to calculate the change of variables for the momenta (verified in sphericalPhaseSpaceChangeOfVars.nb). We have

\begin{aligned}p_r = \frac{x p_x + y p_y + z p_z}{\sqrt{x^2 + y^2 + z^2}}\end{aligned} \hspace{\stretch{1}}(1.0.6a)

\begin{aligned}p_\theta = \frac{(p_x x + p_y y) z - p_z (x^2 + y^2)}{\sqrt{x^2 + y^2}}\end{aligned} \hspace{\stretch{1}}(1.0.6b)

\begin{aligned}p_\phi = x p_y - y p_x\end{aligned} \hspace{\stretch{1}}(1.0.6c)

Now let’s compute the volume element in spherical coordinates. This is

\begin{aligned}d\omega &= dr d\theta d\phi dp_r dp_\theta dp_\phi \\ &= \frac{\partial(r, \theta, \phi, p_r, p_\theta, p_\phi)}{\partial(x, y, z, p_x, p_y, p_z)}dx dy dz dp_x dp_y dp_z \\ &= \begin{vmatrix} \frac{x}{\sqrt{x^2+y^2+z^2}} & \frac{y}{\sqrt{x^2+y^2+z^2}} & \frac{z}{\sqrt{x^2+y^2+z^2}} & 0 & 0 & 0 \\  \frac{x z}{\sqrt{x^2+y^2} \left(x^2+y^2+z^2\right)} & \frac{y z}{\sqrt{x^2+y^2} \left(x^2+y^2+z^2\right)} & -\frac{\sqrt{x^2+y^2}}{x^2+y^2+z^2} & 0 & 0 & 0 \\  -\frac{y}{x^2+y^2} & \frac{x}{x^2+y^2} & 0 & 0 & 0 & 0 \\  \frac{\left(y^2+z^2\right) p_x-x y p_y-x z p_z}{\left(x^2+y^2+z^2\right)^{3/2}} & \frac{\left(x^2+z^2\right) p_y-y \left(x p_x+z p_z\right)}{\left(x^2+y^2+z^2\right)^{3/2}} & \frac{\left(x^2+y^2\right) p_z-z \left(x p_x+y p_y\right)}{\left(x^2+y^2+z^2\right)^{3/2}} & \frac{x}{\sqrt{x^2+y^2+z^2}} & \frac{y}{\sqrt{x^2+y^2+z^2}} & \frac{z}{\sqrt{x^2+y^2+z^2}} \\  \frac{y z \left(y p_x-x p_y\right)-x \left(x^2+y^2\right) p_z}{\left(x^2+y^2\right)^{3/2}} & \frac{x z \left(x p_y-y p_x\right)-y \left(x^2+y^2\right) p_z}{\left(x^2+y^2\right)^{3/2}} & \frac{x p_x+y p_y}{\sqrt{x^2+y^2}} & \frac{x z}{\sqrt{x^2+y^2}} & \frac{y z}{\sqrt{x^2+y^2}} & -\sqrt{x^2+y^2} \\  p_y & -p_x & 0 & -y & x & 0 \\ \end{vmatrix}dx dy dz dp_x dp_y dp_z \\ &= dx dy dz dp_x dp_y dp_z\end{aligned} \hspace{\stretch{1}}(1.0.7)

This Jacobian also has a unit determinant, as we found in the similar cylindrical change of phase space variables.
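For anyone without Mathematica handy, the unit determinant claim can also be spot-checked with a finite difference Jacobian. A Python sketch (the sample points are arbitrary choices of mine):

```python
import math

# Finite-difference check that the map (x, y, z, p_x, p_y, p_z) ->
# (r, theta, phi, p_r, p_theta, p_phi) of eqs. 1.0.5 / 1.0.6 has unit Jacobian.

def spherical(v):
    x, y, z, px, py, pz = v
    r = math.sqrt(x*x + y*y + z*z)
    s = math.sqrt(x*x + y*y)
    return [
        r,
        math.acos(z / r),
        math.atan2(y, x),
        (x*px + y*py + z*pz) / r,
        ((px*x + py*y) * z - pz * (x*x + y*y)) / s,
        x*py - y*px,
    ]

def det(a):
    """Determinant by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in a]
    n, d = len(a), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(a[k][i]))
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for k in range(i + 1, n):
            f = a[k][i] / a[i][i]
            for j in range(i, n):
                a[k][j] -= f * a[i][j]
    return d

def jacobian_det(f, v, h=1e-6):
    """Central-difference Jacobian determinant of f at v."""
    J = []
    for i in range(len(v)):
        vp, vm = v[:], v[:]
        vp[i] += h
        vm[i] -= h
        # this builds the columns as rows; det is transpose-invariant
        J.append([(a - b) / (2 * h) for a, b in zip(f(vp), f(vm))])
    return det(J)

print(jacobian_det(spherical, [0.3, -0.8, 1.1, 0.5, 0.2, -0.7]))  # approximately 1
```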


[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

Posted in Math and Physics Learning. | Leave a Comment »

Change of variables in 2d phase space

Posted by peeterjoot on February 10, 2013

[Click here for a PDF of this post with nicer formatting]


In [1] problem 2.2, it’s suggested to try a spherical change of variables to verify explicitly that phase space volume is preserved, and to explore some related ideas. As a first step let’s try a similar, but presumably easier, change of variables, going from Cartesian to cylindrical phase space coordinates.

Canonical momenta and Hamiltonian

Our cylindrical velocity is

\begin{aligned}\mathbf{v} = \dot{r} \hat{\mathbf{r}} + r \dot{\theta} \hat{\boldsymbol{\theta}},\end{aligned} \hspace{\stretch{1}}(1.2.1)

so a purely kinetic Lagrangian would be

\begin{aligned}\mathcal{L} &= \frac{1}{{2}} m \mathbf{v}^2 \\ &= \frac{1}{{2}} m \left( \dot{r}^2 + r^2 \dot{\theta}^2 \right).\end{aligned} \hspace{\stretch{1}}(1.2.2)

Our canonical momenta are


\begin{aligned}p_r &= \frac{\partial {\mathcal{L}}}{\partial {\dot{r}}} \\ &= m \dot{r}\end{aligned} \hspace{\stretch{1}}(1.0.3a)

\begin{aligned}p_\theta &= \frac{\partial {\mathcal{L}}}{\partial {\dot{\theta}}} \\ &= m r^2 \dot{\theta}.\end{aligned} \hspace{\stretch{1}}(1.0.3b)


and, since the Lagrangian here is purely kinetic, our Hamiltonian is numerically just that kinetic energy

\begin{aligned}H &= \mathcal{L} \\ &= \frac{1}{{2m}} p_r^2 + \frac{1}{{2 m r^2}} p_\theta^2.\end{aligned} \hspace{\stretch{1}}(1.0.4)

Now we need to express our momenta in terms of the Cartesian coordinates. We have for the radial momentum

\begin{aligned}p_r &= m \dot{r} \\ &= m \frac{d{{}}}{dt} \sqrt{x^2 + y^2} \\ &= \frac{1}{{2}} \frac{2 m}{r} \left(  x \dot{x} + y \dot{y}  \right)\end{aligned} \hspace{\stretch{1}}(1.0.5)


\begin{aligned}p_r = \frac{1}{{r}} \left(  x p_x + y p_y  \right)\end{aligned} \hspace{\stretch{1}}(1.0.6)

\begin{aligned}p_\theta &= m r^2 \frac{d{{\theta}}}{dt} \\ &= m r^2 \frac{d{{}}}{dt} \arctan \left( \frac{y}{x} \right).\end{aligned} \hspace{\stretch{1}}(1.0.7)

After some reduction (cyclindrialMomenta.nb), we find

\begin{aligned}p_\theta = p_y x - p_x y.\end{aligned} \hspace{\stretch{1}}(1.0.8)

We can assemble these into a complete set of change of variable equations


\begin{aligned}r = \sqrt{x^2 + y^2}\end{aligned} \hspace{\stretch{1}}(1.0.9a)

\begin{aligned}\theta = \arctan\left( \frac{y}{x} \right)\end{aligned} \hspace{\stretch{1}}(1.0.9b)

\begin{aligned}p_r = \frac{1}{{\sqrt{x^2 + y^2}}} \left(  x p_x + y p_y  \right)\end{aligned} \hspace{\stretch{1}}(1.0.9c)

\begin{aligned}p_\theta = p_y x - p_x y.\end{aligned} \hspace{\stretch{1}}(1.0.9d)


Our phase space volume element change of variables is

\begin{aligned}dr d\theta dp_r dp_\theta &= \frac{\partial(r, \theta, p_r, p_\theta)}{\partial(x, y, p_x, p_y)}dx dy dp_x dp_y \\ &= \begin{vmatrix} \frac{x}{\sqrt{x^2+y^2}} & \frac{y}{\sqrt{x^2+y^2}} & 0 & 0 \\  -\frac{y}{x^2+y^2} & \frac{x}{x^2+y^2} & 0 & 0 \\  \frac{y \left(y p_x-x p_y\right)}{\left(x^2+y^2\right)^{3/2}} & \frac{x \left(x p_y-y p_x\right)}{\left(x^2+y^2\right)^{3/2}} & \frac{x}{\sqrt{x^2+y^2}} & \frac{y}{\sqrt{x^2+y^2}} \\  p_y & -p_x & -y & x \end{vmatrix}dx dy dp_x dp_y \\ &= \frac{x^2 + y^2}{\left(x^2 + y^2\right)^{3/2}}\frac{x^2 + y^2}{\left(x^2 + y^2\right)^{1/2}} dx dy dp_x dp_y \\ &= dx dy dp_x dp_y.\end{aligned} \hspace{\stretch{1}}(1.0.10)

We see explicitly that this point transformation has a unit Jacobian, preserving area.
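As an additional consistency check (not in the text), the momenta of eqs. 1.0.9 should reproduce the Cartesian kinetic energy, since p_r^2 + p_\theta^2/r^2 = p_x^2 + p_y^2 identically. A small Python sketch, with an arbitrary phase space point of my choosing:

```python
import math

# Check that the change of variables of eqs. 1.0.9 preserves the kinetic
# Hamiltonian: (p_x^2 + p_y^2)/2m = p_r^2/2m + p_theta^2/(2 m r^2).
m = 1.7
x, y, px, py = 0.6, -1.2, 0.9, 0.4   # arbitrary phase space point

r = math.sqrt(x * x + y * y)
p_r = (x * px + y * py) / r          # eq. 1.0.9c
p_theta = py * x - px * y            # eq. 1.0.9d

H_cart = (px * px + py * py) / (2 * m)
H_cyl = p_r ** 2 / (2 * m) + p_theta ** 2 / (2 * m * r * r)
print(H_cart, H_cyl)   # equal
```

The agreement is exact (to roundoff), since (x p_x + y p_y)^2 + (x p_y - y p_x)^2 = (x^2 + y^2)(p_x^2 + p_y^2).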


[1] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996.

Posted in Math and Physics Learning. | Leave a Comment »

Liouville’s theorem questions on density and current

Posted by peeterjoot on February 5, 2013

[Click here for a PDF of this post with nicer formatting and figures if the post had any]

Liouville’s theorem questions on density and current

In the midterm we were asked to state and prove Liouville’s theorem. I couldn’t remember the proof, having only a recollection that it had something to do with the continuity equation

\begin{aligned}0 = \frac{\partial {\rho}}{\partial {t}} + \frac{\partial {j}}{\partial {x}},\end{aligned} \hspace{\stretch{1}}(1.1.1)

but unfortunately couldn’t remember what the j was. Looking up the proof, it’s actually really simple, just an application of the chain rule for a function \rho that’s presumed to be a function of time, position and momentum variables. It didn’t appear to me that this proof had anything to do with any notion of density, so I posed the following questions.


The core of the proof can be distilled to one dimension, removing all the indexes that obfuscate what’s being done. For that case, applying the chain rule to a function \rho(t, x, p), we have

\begin{aligned}\frac{d{{\rho}}}{dt} &= \frac{\partial {\rho}}{\partial {t}} + \frac{\partial {x}}{\partial {t}} \frac{\partial {\rho}}{\partial {x}} + \frac{\partial {p}}{\partial {t}} \frac{\partial {\rho}}{\partial {p}} \\ &= \frac{\partial {\rho}}{\partial {t}} + \dot{x} \frac{\partial {\rho}}{\partial {x}} + \dot{p} \frac{\partial {\rho}}{\partial {p}} \\ &= \frac{\partial {\rho}}{\partial {t}} + \frac{\partial {\left( \dot{x} \rho \right)}}{\partial {x}} + \frac{\partial {\left( \dot{p} \rho \right)}}{\partial {p}} - \rho \left(  \frac{\partial {\dot{x}}}{\partial {x}} + \frac{\partial {\dot{p}}}{\partial {p}}  \right) \\ &= \frac{\partial {\rho}}{\partial {t}} + \frac{\partial {\left( \dot{x} \rho \right)}}{\partial {x}} + \frac{\partial {\left( \dot{p} \rho \right)}}{\partial {p}} - \rho \underbrace{\left(\frac{\partial {}}{\partial {x}} \left(  \frac{\partial {H}}{\partial {p}}  \right)+\frac{\partial {}}{\partial {p}} \left(  -\frac{\partial {H}}{\partial {x}}  \right)\right)}_{= 0}\end{aligned} \hspace{\stretch{1}}(1.1.2)

Wrong interpretation

From this I’d thought that the theorem was about steady states. If we do have a steady state, where d\rho/dt = 0, we have

\begin{aligned}0 = \frac{\partial {\rho}}{\partial {t}} + \frac{\partial {\left( \dot{x} \rho \right)}}{\partial {x}} + \frac{\partial {\left( \dot{p} \rho \right)}}{\partial {p}}.\end{aligned} \hspace{\stretch{1}}(1.1.3)

That would answer the question of what the current is, it’s this tuple

\begin{aligned}\mathbf{j} = \rho (\dot{x}, \dot{p}),\end{aligned} \hspace{\stretch{1}}(1.1.4)

so if we introduce a “phase space” gradient

\begin{aligned}\boldsymbol{\nabla} = \left(  \frac{\partial {}}{\partial {x}}, \frac{\partial {}}{\partial {p}}  \right)\end{aligned} \hspace{\stretch{1}}(1.1.5)

we’ve got something that looks like a continuity equation

\begin{aligned}0 = \frac{\partial {\rho}}{\partial {t}} + \boldsymbol{\nabla} \cdot \mathbf{j}.\end{aligned} \hspace{\stretch{1}}(1.1.6)

Given this misinterpretation of the theorem, I had the following two questions

  • This function \rho appears to be pretty much arbitrary. I don’t see how this connects to any notion of density?
  • If we pick a specific Hamiltonian, say the 1D SHO, what physical interpretation do we have for this “current” \mathbf{j}?

The clarification

Asking about this, the response was “Actually, equation 1.1.3 has to be assumed for the proof. This equation holds if \rho is the phase space density and since the pair in 1.1.4 is the current density in phase space. The theorem then states that d\rho/dt = 0 whether or not one is in the steady state. This means even as the system is evolving in time, if we sit on a particular phase space point and follow it around as it evolves, the density in our neighborhood will be a constant.”
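That picture of following a phase space neighbourhood around as it evolves can be demonstrated numerically: evolve a small blob of phase space points under the exact 1D SHO flow and watch its enclosed area stay fixed. A Python sketch (m, k, and the blob are arbitrary choices of mine):

```python
import math

# Liouville demonstration: the area of a blob of phase space points is
# unchanged under the exact 1D SHO time evolution.
m, k = 2.0, 5.0
omega = math.sqrt(k / m)

def evolve(x0, p0, t):
    """Exact SHO flow: a linear, unit-determinant map of (x, p)."""
    c, s = math.cos(omega * t), math.sin(omega * t)
    return (x0 * c + p0 * s / (m * omega), p0 * c - m * omega * x0 * s)

def shoelace(pts):
    """Area of a polygon given its vertices in order."""
    n = len(pts)
    return 0.5 * abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                         - pts[(i + 1) % n][0] * pts[i][1] for i in range(n)))

# a small circular blob centred at (x, p) = (1, 0.5)
blob = [(1 + 0.1 * math.cos(a), 0.5 + 0.1 * math.sin(a))
        for a in (2 * math.pi * i / 200 for i in range(200))]

A0 = shoelace(blob)
A1 = shoelace([evolve(x, p, 1.234) for x, p in blob])
print(A0, A1)   # areas agree
```

Since the SHO flow is linear with unit determinant, the polygon area is preserved exactly, not just approximately.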

Posted in Math and Physics Learning. | Leave a Comment »

The continuity equation for phase space density

Posted by peeterjoot on February 5, 2013

[Click here for a PDF of this post with nicer formatting]

Thinking back to the origin of the 3D continuity equation from fluid mechanics, we used the geometrical argument that any change to the mass in a volume had to leave through the boundary. This was

\begin{aligned}\frac{\partial }{\partial t} \int_V \rho dV = -\int_{\partial V} \left( \rho \mathbf{u} \right) \cdot \hat{\mathbf{n}} dA= -\int_{V} \boldsymbol{\nabla} \cdot \left( \rho \mathbf{u} \right) dV.\end{aligned} \hspace{\stretch{1}}(1.0.1)

We used Green’s theorem above, allowing us to write, provided the volume is fixed

\begin{aligned}0 =\int_V \left( \frac{\partial {\rho}}{\partial t} + \boldsymbol{\nabla} \cdot \left( \rho \mathbf{u} \right) \right)dV,\end{aligned} \hspace{\stretch{1}}(1.0.2)


\begin{aligned}0 =\frac{\partial {\rho}}{\partial t} + \boldsymbol{\nabla} \cdot \left( \rho \mathbf{u} \right).\end{aligned} \hspace{\stretch{1}}(1.0.3)

Consider the following phase space picture (Fig 1). The time evolution of any individual particle (or set of particles that lie in the same element of phase space) is directed along (\dot{q}, \dot{p}). So, the phase space density leaving through the surface is in proportion to the normal component of \mathbf{j} = \rho (\dot{q}, \dot{p}) (red in the figure).

Fig1: Phase space current


With this geometrical picture in mind, and with a basis \mathbf{e}_{i_{\alpha}} for the positions q_{i_\alpha} and \mathbf{f}_{i_\alpha} for the momentum components p_{i_\alpha}, the 6N dimensional phase space equivalent of 1.0.1 is then

\begin{aligned}\frac{\partial }{\partial t} \int_{V_{6N}} \rho d^{3N} q d^{3N} p &= -\int_{\partial V_{6N}} \rho \sum_{i_\alpha} \left( \mathbf{e}_{i_\alpha} \dot{q}_{i_\alpha} + \mathbf{f}_{i_\alpha} \dot{p}_{i_\alpha} \right) \cdot \hat{\mathbf{n}} dA \\ &= -\int_{V_{6N}} d^{3N} q d^{3N} p \sum_{i_\alpha} \left( \frac{ \partial \left( \rho \dot{q}_{i_\alpha} \right) }{\partial q_{i_\alpha} } + \frac{ \partial \left( \rho \dot{p}_{i_\alpha} \right) }{\partial p_{i_\alpha} } \right)\end{aligned} \hspace{\stretch{1}}(1.0.5)

Here dA is the surface area element for the phase space and \hat{\mathbf{n}} is the unit normal to this surface. We have to assume the existence of a divergence theorem for the 6N dimensional space.

We can now regroup, and find for the integrand

\begin{aligned} 0 = \frac{\partial \rho}{\partial t} + \sum_{i_\alpha} \left( \frac{\partial \left( \rho \dot{q}_{i_\alpha} \right) }{ \partial q_{i_\alpha} } + \frac{\partial \left( \rho \dot{p}_{i_\alpha} \right) }{ \partial p_{i_\alpha} } \right),\end{aligned} \hspace{\stretch{1}}(1.0.5)

which is the continuity equation. The assumptions that we have to make are that the flow of the density in phase space through the surface is proportional to the projection of the vector \rho (\dot{q}_{i_\alpha}, \dot{p}_{i_\alpha}) onto the surface normal, and then we use the same old arguments (extended to a 6N dimensional space) as we did for the continuity equation for 3D masses.
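The incompressibility input that turns this continuity equation into Liouville’s theorem, \partial \dot{q}/\partial q + \partial \dot{p}/\partial p = 0, can be spot-checked numerically for any sample Hamiltonian. A Python sketch (the Hamiltonian with an x p cross term is my own example, chosen so that neither partial vanishes trivially):

```python
# Finite-difference check that the Hamiltonian phase space velocity field
# (xdot, pdot) = (dH/dp, -dH/dx) is divergence free.

def H(x, p):
    # arbitrary example Hamiltonian with an x-p cross term
    return p * p / 2 + 0.3 * x * p + 2.0 * x * x / 2

def velocity(x, p, h=1e-6):
    xdot = (H(x, p + h) - H(x, p - h)) / (2 * h)    #  dH/dp
    pdot = -(H(x + h, p) - H(x - h, p)) / (2 * h)   # -dH/dx
    return xdot, pdot

def divergence(x, p, h=1e-5):
    dxdot_dx = (velocity(x + h, p)[0] - velocity(x - h, p)[0]) / (2 * h)
    dpdot_dp = (velocity(x, p + h)[1] - velocity(x, p - h)[1]) / (2 * h)
    return dxdot_dx + dpdot_dp

print(divergence(0.7, -1.1))   # approximately 0
```

Here \partial \dot{x}/\partial x = 0.3 and \partial \dot{p}/\partial p = -0.3 individually, and the equal mixed partials of H cancel in the sum, exactly as in the general argument.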

Posted in Math and Physics Learning. | Leave a Comment »

1D SHO phase space

Posted by peeterjoot on February 2, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Let’s review the 1D SHO to get a better feel for the ideas of phase space. Given a spring and mass system

\begin{aligned}F = - k x = -(\boldsymbol{\nabla} \phi)_x\end{aligned} \hspace{\stretch{1}}(1.0.1)

our potential is

\begin{aligned}\phi = \frac{1}{{2}} k x^2,\end{aligned} \hspace{\stretch{1}}(1.0.2)

So, our Hamiltonian is

\begin{aligned}H = \frac{1}{{2m}}{p^2} + \frac{1}{{2}} k x^2.\end{aligned} \hspace{\stretch{1}}(1.0.3)

Hamilton’s equations follow from H = p \dot{x} - \mathcal{L}

\begin{aligned}\frac{\partial {H}}{\partial {p}} = \dot{x}\end{aligned} \hspace{\stretch{1}}(1.0.4a)

\begin{aligned}\frac{\partial {H}}{\partial {x}} = -\dot{p}.\end{aligned} \hspace{\stretch{1}}(1.0.4b)

For the SHO this is

\begin{aligned}\frac{d{{}}}{dt}\begin{bmatrix}p \\ x\end{bmatrix}=\begin{bmatrix}-\frac{\partial {H}}{\partial {x}} \\ \frac{\partial {H}}{\partial {p}} \end{bmatrix}=\begin{bmatrix} - k x \\ p/m\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.5)

It’s convenient to non-dimensionalize this. Using \omega = \sqrt{k/m}, which has dimensions of 1/T, we form

\begin{aligned}\frac{d{{}}}{dt}\begin{bmatrix}p/m \\ \omega x\end{bmatrix}=\begin{bmatrix} - (k/m) x \\ (\omega) p/m\end{bmatrix}=\omega\begin{bmatrix} - \omega x \\ p/m\end{bmatrix}=\omega\begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}\begin{bmatrix}p/m \\ \omega x\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.6)

With definitions

\begin{aligned}i = \begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.7a)

\begin{aligned}\mathbf{x} =\begin{bmatrix}p/m \\ \omega x\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.7b)

the SHO Hamilton’s equations are just

\begin{aligned}\boxed{\mathbf{x}' = i \omega \mathbf{x}.}\end{aligned} \hspace{\stretch{1}}(1.0.8)

The solution follows immediately

\begin{aligned}\mathbf{x} = e^{i \omega t} \mathbf{x}_0.\end{aligned} \hspace{\stretch{1}}(1.0.9)

We expect this matrix exponential to have the structure of a rotation matrix, so let’s write it out explicitly to verify

\begin{aligned}e^{i \omega t} = I \cos(\omega t)+ i \sin(\omega t)=\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}\cos(\omega t)+ \begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}\sin(\omega t)=\begin{bmatrix}\cos(\omega t) & - \sin(\omega t) \\ \sin(\omega t) & \cos(\omega t)\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.10)

In this non-dimensionalized phase space, with p/m on the horizontal axis and \omega x on the vertical axis, this is a counterclockwise rotation. The (squared) radius of the rotation is

\begin{aligned}(p_0/m)^2+(\omega x_0)^2 = \frac{2}{m}\left( {\frac{p_0^2}{2m}+ \frac{1}{{2}} \omega^2 m x_0^2} \right)= \frac{2}{m}\left( {\frac{p_0^2}{2m}+ \frac{1}{{2}} k x_0^2} \right)=\frac{2 E}{m}.\end{aligned} \hspace{\stretch{1}}(1.0.11)
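A quick numeric check of this invariance (the sample values of m, k and the initial point are my own):

```python
import math

# Check that the rotation solution (1.0.10) preserves the squared radius
# (p/m)^2 + (omega x)^2 = 2E/m of the non-dimensionalized SHO phase point.
m, k = 1.5, 4.0
omega = math.sqrt(k / m)
p0, x0 = 0.8, -0.3
E = p0 * p0 / (2 * m) + 0.5 * k * x0 * x0

def rotate(t):
    """Apply e^{i omega t} to the phase point (p/m, omega x)."""
    c, s = math.cos(omega * t), math.sin(omega * t)
    u1, u2 = p0 / m, omega * x0
    return (c * u1 - s * u2, s * u1 + c * u2)

for t in (0.0, 0.4, 1.7, 3.9):
    u1, u2 = rotate(t)
    print(t, u1 * u1 + u2 * u2, 2 * E / m)   # constant, equal to 2E/m
```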

It makes sense to put the initial position in phase space in polar form too. We can write

\begin{aligned}\begin{bmatrix}p_0/m \\ \omega x_0\end{bmatrix}=\sqrt{\frac{2 E}{m}}e^{i \theta}\begin{bmatrix}1 \\ 0\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.12)


\begin{aligned}\theta = \arctan \left( { \omega m \frac{x_0}{p_0} } \right).\end{aligned} \hspace{\stretch{1}}(1.0.13)

Now the non-dimensionalized phase space solution takes the particularly simple form

\begin{aligned}\boxed{\mathbf{x} = \sqrt{ \frac{2 E}{m} } e^{i (\omega t + \theta)} \begin{bmatrix}1 \\ 0\end{bmatrix}.}\end{aligned} \hspace{\stretch{1}}(1.0.14)

Removing the non-dimensionalization

Written explicitly, our momentum and position trace out elliptical trajectories

\begin{aligned}p = p_0 \cos(\omega t) - \omega m x_0 \sin(\omega t) \end{aligned} \hspace{\stretch{1}}(1.0.15a)

\begin{aligned}x = \frac{p_0}{m \omega} \sin(\omega t) + x_0 \cos(\omega t).\end{aligned} \hspace{\stretch{1}}(1.0.15b)

With the initial phase space point specified as a rotation from the momentum axis as in 1.0.13, this is just

\begin{aligned}p = \sqrt{ \frac{2 E}{m} } m \cos(\omega t + \theta) = \sqrt{ 2 m E } \cos(\omega t + \theta) \end{aligned} \hspace{\stretch{1}}(1.0.16a)

\begin{aligned}x = \sqrt{ \frac{2 E}{m} } \frac{1}{{\omega}} \sin(\omega t + \theta) = \sqrt{ \frac{2 E}{k} } \sin(\omega t + \theta) \end{aligned} \hspace{\stretch{1}}(1.0.16b)

This is plotted in (Fig 1)


Fig 1: A SHO phase space trajectory

Observe that the rotation angle \theta doesn’t specify a geometric rotation of the ellipse. Instead, it is a function of the starting point of the elliptical trajectory through phase space.

Aside. Complex representation of phase space points

It’s interesting to note that we can also work in a complex representation of phase space, instead of a matrix picture (for this 1D SHO problem).

\begin{aligned}\frac{d{{}}}{dt} \left( {\frac{p}{m} + i \omega x} \right)=\omega \left( {- \omega x + i \frac{p}{m}} \right)=i \omega \left( {\frac{p}{m} + i \omega x} \right).\end{aligned} \hspace{\stretch{1}}(1.0.17)


With

\begin{aligned}z = \frac{p}{m} + i \omega x, \end{aligned} \hspace{\stretch{1}}(1.0.18)

Hamilton’s equations take the form

\begin{aligned}z' = i \omega z.\end{aligned} \hspace{\stretch{1}}(1.0.19)

Again, we can read off the solution by inspection

\begin{aligned}z = e^{i \omega t} z_0.\end{aligned} \hspace{\stretch{1}}(1.0.20)
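As a sanity check (sample values mine), the complex solution agrees with the explicit trajectories of eqs. 1.0.15:

```python
import math
import cmath

# The complex phase point z = p/m + i omega x evolves by a pure phase; check
# that this matches the real-space solution of eqs. 1.0.15.
m, k = 1.5, 4.0
omega = math.sqrt(k / m)
x0, p0 = -0.3, 0.8

t = 2.1
z = cmath.exp(1j * omega * t) * (p0 / m + 1j * omega * x0)

p = p0 * math.cos(omega * t) - omega * m * x0 * math.sin(omega * t)  # 1.0.15a
x = p0 / (m * omega) * math.sin(omega * t) + x0 * math.cos(omega * t)  # 1.0.15b

print(z, p / m + 1j * omega * x)   # same complex number
```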

Posted in Math and Physics Learning. | Leave a Comment »

PHY452H1S Basic Statistical Mechanics. Lecture 6: Volumes in phase space. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on January 29, 2013

[Click here for a PDF of this post with nicer formatting]


Peeter’s lecture notes from class. May not be entirely coherent.

Liouville’s theorem

We’ve looked at the continuity equation of phase space density

\begin{aligned}0 = \frac{\partial {\rho}}{\partial {t}} + \sum_{i_\alpha} \left(\frac{\partial {}}{\partial {p_{i_\alpha}}} \left( \rho \dot{p}_{i_\alpha} \right) + \frac{\partial {\left( \rho \dot{x}_{i_\alpha} \right) }}{\partial {x_{i_\alpha}}}\right)\end{aligned} \hspace{\stretch{1}}(1.2.1)

which with

\begin{aligned}\frac{\partial {\dot{p}_{i_\alpha}}}{\partial {p_{i_\alpha}}} + \frac{\partial {\dot{x}_{i_\alpha}}}{\partial {x_{i_\alpha}}} = 0\end{aligned} \hspace{\stretch{1}}(1.2.2)

led us to Liouville’s theorem

\begin{aligned}\boxed{\frac{d{{\rho}}}{dt}(x, p, t) = 0.}\end{aligned} \hspace{\stretch{1}}(1.2.3)

We call a system ergodic when, as t \rightarrow \infty, its trajectory covers all of the available phase space. Not all systems are necessarily ergodic, but the hope is that all sufficiently complicated systems will be so.

We hope that

\begin{aligned}\rho(x, p, t \rightarrow \infty) \implies \frac{\partial {\rho}}{\partial {t}} = 0 \qquad \mbox{in steady state}\end{aligned} \hspace{\stretch{1}}(1.2.4)

In particular for \rho = \text{constant}, we see that our continuity equation 1.2.1 results in 1.2.2.

For example in a SHO system with a cyclic phase space, as in (Fig 1).

Fig 1: Phase space volume trajectory


The time average of an observable A along such a trajectory is

\begin{aligned}\left\langle{{A}}\right\rangle = \frac{1}{{\tau}} \int_0^\tau dt A( x_0(t), p_0(t) ),\end{aligned} \hspace{\stretch{1}}(1.2.5)

or equivalently with an ensemble average, imagining that we are averaging over a number of different systems

\begin{aligned}\left\langle{{A}}\right\rangle = \frac{1}{{\tau}} \int dx dp A( x, p ) \underbrace{\rho(x, p)}_{\text{constant}}\end{aligned} \hspace{\stretch{1}}(1.2.6)

If we say that

\begin{aligned}\rho(x, p) = \text{constant} = \frac{1}{{\Omega}},\end{aligned} \hspace{\stretch{1}}(1.2.7)

so that

\begin{aligned}\left\langle{{A}}\right\rangle = \frac{1}{{\Omega}} \int dx dp A( x, p ) \end{aligned} \hspace{\stretch{1}}(1.2.8)

then what is this constant? We fix it by the normalization constraint

\begin{aligned}\int dx dp \rho(x, p) = 1\end{aligned} \hspace{\stretch{1}}(1.2.9)

So, \Omega is the allowed “volume” of phase space, the number of states that the system can occupy consistent with conservation of energy.

What’s the probability for a given configuration? We’ll have to enumerate all the possible configurations. For a coin toss example, we can also ask how many configurations exist where the sum of the “coin tosses” is fixed.
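For a small number of coins this counting can be done by brute force. The Python sketch below (with N an arbitrary small choice) enumerates all 2^N outcomes and checks the counts against the binomial coefficients:

```python
# Count coin toss configurations with a fixed number of heads by brute force
# and compare against the closed form C(N, k). N is kept small deliberately.
from itertools import product
from math import comb

N = 10
counts = {k: 0 for k in range(N + 1)}
for outcome in product((0, 1), repeat=N):    # 0 = tails, 1 = heads
    counts[sum(outcome)] += 1

for k in range(N + 1):
    assert counts[k] == comb(N, k)

# The probability of a fixed sum is its count over the 2^N total configurations:
print(counts[N // 2] / 2 ** N)
```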

A worked example: Ideal gas calculation of \Omega

  • N gas atoms at phase space points \mathbf{x}_i, \mathbf{p}_i
  • constrained to volume V
  • Energy fixed at E.

\begin{aligned}\Omega(N, V, E) = \int_V d\mathbf{x}_1 d\mathbf{x}_2 \cdots d\mathbf{x}_N \int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \delta \left(E - \frac{\mathbf{p}_1^2}{2m}- \frac{\mathbf{p}_2^2}{2m}\cdots- \frac{\mathbf{p}_N^2}{2m}\right)=\underbrace{V^N}_{\text{Real space volume, not N dimensional ``volume''}} \int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \delta \left(E - \frac{\mathbf{p}_1^2}{2m}- \frac{\mathbf{p}_2^2}{2m}\cdots- \frac{\mathbf{p}_N^2}{2m}\right)\end{aligned} \hspace{\stretch{1}}(1.10)

With \gamma defined implicitly by

\begin{aligned}\frac{d\gamma}{dE} = \Omega\end{aligned} \hspace{\stretch{1}}(1.3.11)

so that with the Heaviside theta as in (Fig 2),

\begin{aligned}\Theta(x) = \left\{\begin{array}{l l}1 & \quad x \ge 0 \\ 0 & \quad x < 0\end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.0.12a)

\begin{aligned}\frac{d\Theta}{dx} = \delta(x),\end{aligned} \hspace{\stretch{1}}(1.0.12b)

Fig 2: Heaviside theta


we have

\begin{aligned}\gamma(N, V, E) = V^N \int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \Theta \left(E - \sum_i \frac{\mathbf{p}_i^2}{2m}\right)\end{aligned} \hspace{\stretch{1}}(1.0.13)

In three dimensions (p_x, p_y, p_z), the dimension of the momentum part of the phase space is 3. In general the dimension of this space is 3N. Here

\begin{aligned}\int d\mathbf{p}_1 d\mathbf{p}_2 \cdots d\mathbf{p}_N \Theta \left(E - \sum_i \frac{\mathbf{p}_i^2}{2m}\right),\end{aligned} \hspace{\stretch{1}}(1.0.14)

is the volume of a “sphere” in 3N dimensions, which we found in the problem set to be

\begin{aligned}V_{m} = \frac{ \pi^{m/2} R^{m} }{ \Gamma\left( m/2 + 1 \right)}.\end{aligned} \hspace{\stretch{1}}(1.0.15a)

\begin{aligned}\Gamma(x) = \int_0^\infty dy e^{-y} y^{x-1}\end{aligned} \hspace{\stretch{1}}(1.0.15b)

\begin{aligned}\Gamma(x + 1) = x \Gamma(x) = x!\end{aligned} \hspace{\stretch{1}}(1.0.15c)
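The volume formula 1.0.15a is easy to sanity check numerically against the familiar low dimensional cases, the area of a disk (m = 2) and the volume of a ball (m = 3):

```python
# Check V_m = pi^(m/2) R^m / Gamma(m/2 + 1) against the known 2d and 3d results.
from math import pi, gamma

def sphere_volume(m, R):
    return pi ** (m / 2) * R ** m / gamma(m / 2 + 1)

R = 2.0
print(sphere_volume(2, R), pi * R ** 2)              # disk area
print(sphere_volume(3, R), 4.0 / 3.0 * pi * R ** 3)  # ball volume
```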

Since we have

\begin{aligned}\mathbf{p}_1^2 + \cdots + \mathbf{p}_N^2 \le 2 m E\end{aligned} \hspace{\stretch{1}}(1.0.16)

the radius is

\begin{aligned}\text{radius} = \sqrt{ 2 m E}.\end{aligned} \hspace{\stretch{1}}(1.0.17)

This gives

\begin{aligned}\gamma(N, V, E) = V^N \frac{ \pi^{3 N/2} ( 2 m E)^{3 N/2}}{\Gamma( 3N/2 + 1) }= V^N \frac{2}{3N} \frac{ \pi^{3 N/2} ( 2 m E)^{3 N/2}}{\Gamma( 3N/2 ) },\end{aligned} \hspace{\stretch{1}}(1.0.18)


\begin{aligned}\Omega(N, V, E) = V^N \pi^{3 N/2} ( 2 m E)^{3 N/2 - 1} \frac{2 m}{\Gamma( 3N/2 ) }\end{aligned} \hspace{\stretch{1}}(1.0.19)

This result is almost correct, and we have to correct it in two ways. We have to fix the counting, since the particles should be treated as indistinguishable.

  • Indistinguishability. We must divide by N!.
  • \Omega is not dimensionless. We need to divide by h^{3N}, where h is Planck’s constant.

In the real world we have to consider this as a quantum mechanical system. Imagine a two dimensional phase space. The allowed points are illustrated in (Fig 3).

Fig 3: Phase space volume adjustment for the uncertainty principle


Since \Delta x \Delta p \sim \hbar, to answer the question of how many boxes there are, we calculate the total volume and then divide by the volume of each box. This sort of handwaving wouldn’t be required if we did a proper quantum mechanical treatment.

The corrected result is

\begin{aligned}\boxed{\Omega_{\mathrm{correct}} = \frac{V^N}{N!} \frac{1}{{h^{3N}}} \frac{( 2 \pi m E)^{3 N/2 }}{E} \frac{1}{\Gamma( 3N/2 ) }}\end{aligned} \hspace{\stretch{1}}(1.0.20)

To come

We’ll look at entropy

\begin{aligned}\underbrace{S}_{\text{Entropy}} = \underbrace{k_{\mathrm{B}}}_{\text{Boltzmann's constant}} \ln \underbrace{\Omega_{\mathrm{correct}}}_{\text{phase space volume (number of configurations)}}\end{aligned} \hspace{\stretch{1}}(1.0.21)
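Since \Gamma(3N/2) overflows for any realistic N, a direct numerical evaluation of this entropy has to work with logarithms. The Python sketch below does that for the boxed \Omega_{\mathrm{correct}}; the mass is a rough argon-like value, and all the numbers are arbitrary illustration choices:

```python
# Evaluate S = k_B ln(Omega_correct) in log space, where
# ln Omega = N ln V - ln N! - 3N ln h + (3N/2) ln(2 pi m E) - ln E - ln Gamma(3N/2)
from math import log, lgamma, pi

kB = 1.380649e-23    # J/K, Boltzmann's constant
h = 6.62607015e-34   # J s, Planck's constant
m = 6.6e-26          # kg, roughly the mass of an argon atom

def entropy(N, V, E):
    ln_omega = (N * log(V) - lgamma(N + 1) - 3 * N * log(h)
                + 1.5 * N * log(2 * pi * m * E) - log(E)
                - lgamma(1.5 * N))
    return kB * ln_omega

# N particles in a litre, with total energy ~ (3/2) N k_B T at T = 300 K:
N, V = 1000, 1e-3
E = 1.5 * N * kB * 300.0
print(entropy(N, V, E))    # J/K
```

As expected for an entropy, the result grows with both E and V.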

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , , , | Leave a Comment »

PHY452H1S Basic Statistical Mechanics. Lecture 4: Maxwell distribution for gases. Taught by Prof. Arun Paramekanti

Posted by peeterjoot on January 17, 2013

[Click here for a PDF of this post with nicer formatting]


Peeter’s lecture notes from class. May not be entirely coherent.

Review: Lead up to Maxwell distribution for gases

For the random walk, after a number of collisions N_{\mathrm{c}}, we found that a particle (labeled the ith) will have a velocity

\begin{aligned}\mathbf{v}_i(N_{\mathrm{c}}) = \mathbf{v}_i(0) + \sum_{l = 1}^{N_{\mathrm{c}}} \Delta \mathbf{v}_i(l)\end{aligned} \hspace{\stretch{1}}(1.1.1)

We argued that the probability distribution for finding a velocity \mathbf{v} was as in

\begin{aligned}\mathcal{P}_{N_{\mathrm{c}}}(\mathbf{v}_i) \propto \exp\left(-\frac{(\mathbf{v}_i - \mathbf{v}_i(0))^2}{ 2 N_{\mathrm{c}}}\right)\end{aligned} \hspace{\stretch{1}}(1.1.2)

Fig1: Velocity distribution found without considering kinetic energy conservation


What went wrong?

However, we know that this must be wrong, since we require

\begin{aligned}T = \frac{1}{{2}} \sum_{i = 1}^{n} \mathbf{v}_i^2 = \mbox{conserved}\end{aligned} \hspace{\stretch{1}}(1.2.3)

Where our argument went wrong is that when the particle has a greater than average velocity, the effect of a collision will be to slow it down. We have to account for

  1. Fluctuations \rightarrow “random walk”
  2. Dissipation \rightarrow “slowing down”

There were two ingredients to diffusion (the random walk):

  1. Conservation of particles

    \begin{aligned}\frac{\partial {c}}{\partial {t}} + \frac{\partial {j}}{\partial {x}} = 0\end{aligned} \hspace{\stretch{1}}(1.2.4)

    We can also think about conservation of particles in velocity space

    \begin{aligned}\frac{\partial {c}}{\partial {t}}(v, t) + \frac{\partial {j_v}}{\partial {v}} = 0\end{aligned} \hspace{\stretch{1}}(1.2.5)

    where j_v is a probability current in this velocity space.

  2. Fick’s law in velocity space takes the form

    \begin{aligned}j_v = -D \frac{\partial {c}}{\partial {v}}(v, t)\end{aligned} \hspace{\stretch{1}}(1.2.6)

The diffusion results in an “attempt” to flatten the distribution of the concentration as in (Fig 2).

Fig2: A friction-like term is required to oppose the diffusion pressure

We’d like to add to the diffusion current an extra frictional like term

\begin{aligned}j_v = -D \frac{\partial {c}}{\partial {v}}(v, t) - \eta v c(v)\end{aligned} \hspace{\stretch{1}}(1.2.7)

We want something directed opposite to the velocity and proportional to the concentration

\begin{aligned}\text{Diffusion current} \equiv -D \frac{\partial {c}}{\partial {v}}(v, t)\end{aligned} \hspace{\stretch{1}}(1.0.8a)

\begin{aligned}\text{Dissipation current} \equiv - \eta v c(v, t)\end{aligned} \hspace{\stretch{1}}(1.0.8b)

This gives

\begin{aligned}\frac{\partial {c}}{\partial {t}}(v, t) &= -\frac{\partial {j_v}}{\partial {v}} \\ &= -\frac{\partial {}}{\partial {v}} \left(-D \frac{\partial {c}}{\partial {v}}(v, t) - \eta v c(v)\right) \\ &= D \frac{\partial^2 {{c}}}{\partial {{v}}^2}(v, t) + \eta \frac{\partial {}}{\partial {v}}\left(v c(v, t)\right)\end{aligned} \hspace{\stretch{1}}(1.0.9)

Can we find a steady state solution to this equation when t \rightarrow \infty? For such a steady state we have

\begin{aligned}0 = D \frac{d^2 c}{dv^2} + \eta \frac{d}{dv} \left(v c\right)\end{aligned} \hspace{\stretch{1}}(1.0.10)

Integrating once we have

\begin{aligned}D \frac{d c}{dv} = -\eta v c + \text{constant}\end{aligned} \hspace{\stretch{1}}(1.0.11)

supposing that dc/dv = 0 at v = 0 (so that the integration constant vanishes), integrating once more we have

\begin{aligned}\boxed{c(v) \propto \exp\left(- \frac{\eta v^2}{2 D}\right).}\end{aligned} \hspace{\stretch{1}}(1.0.12)

This is the Maxwell-Boltzmann distribution, illustrated in (Fig3).

Fig3: Maxwell-Boltzmann distribution

The concentration c(v), suitably normalized, can be interpreted as a probability distribution.

Calculating \left\langle{{v^2}}\right\rangle from this distribution we can identify the D/\eta factor.

\begin{aligned}\left\langle{{v^2}}\right\rangle &= \frac{\int v^2 e^{- \eta v^2/2D} dv}{\int e^{- \eta v^2/2D} dv} \\ &= \frac{\frac{D}{\eta} \int v \frac{d}{dv} \left( -e^{- \eta v^2/2D} \right) dv}{\int e^{- \eta v^2/2D} dv} \\ &= -\frac{D}{\eta} \frac{\int -e^{- \eta v^2/2D} dv}{\int e^{- \eta v^2/2D} dv} \\ &= \frac{D}{\eta}.\end{aligned} \hspace{\stretch{1}}(1.13)

This also happens to be the energy in terms of temperature (we can view this as a definition of the temperature for now), writing

\begin{aligned}\frac{1}{{2}} m \left\langle{{\mathbf{v}^2}}\right\rangle = \frac{1}{{2}} m \left( \frac{D}{\eta} \right) = \frac{1}{{2}} k_{\mathrm{B}} T\end{aligned} \hspace{\stretch{1}}(1.0.14)


\begin{aligned}k_{\mathrm{B}} = \mbox{Boltzmann constant}\end{aligned} \hspace{\stretch{1}}(1.0.15a)

\begin{aligned}T = \mbox{absolute temperature}\end{aligned} \hspace{\stretch{1}}(1.0.15b)

Equilibrium steady states

Fluctuations \leftrightarrow Dissipation

\begin{aligned}\boxed{\frac{D}{\eta} = \frac{ k_{\mathrm{B}} T}{m}}\end{aligned} \hspace{\stretch{1}}(1.0.16)

This is a specific example of the more general Fluctuation-Dissipation theorem.
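The balance can be watched directly in a crude simulation: give each velocity random kicks whose strength is set by D and damp it at rate \eta, and the stationary spread settles at \left\langle v^2 \right\rangle = D/\eta. This Python sketch uses a naive Euler update and arbitrary parameters; it is an illustration, not a careful stochastic integrator:

```python
# Fluctuation (random kicks ~ D) balanced against dissipation (damping ~ eta):
# the stationary velocity distribution should satisfy <v^2> = D / eta.
import random

random.seed(1)
D, eta = 0.5, 1.0
dt, steps, n = 0.01, 1000, 5000   # ~10 relaxation times, 5000 particles

kick = (2.0 * D * dt) ** 0.5
v = [0.0] * n
for _ in range(steps):
    v = [vi - eta * vi * dt + kick * random.gauss(0.0, 1.0) for vi in v]

v2 = sum(vi * vi for vi in v) / n
print(v2, D / eta)   # should agree to within a few percent
```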

Generalizing to 3D

The velocity space current (Fick’s law plus the dissipation term) and the continuity equation in 3D are respectively

\begin{aligned}\mathbf{j} = -D \boldsymbol{\nabla}_\mathbf{v} c(\mathbf{v}, t) - \eta \mathbf{v} c(\mathbf{v}, t)\end{aligned} \hspace{\stretch{1}}(1.0.17a)

\begin{aligned}\frac{\partial {}}{\partial {t}} c(\mathbf{v}, t) + \boldsymbol{\nabla}_\mathbf{v} \cdot \mathbf{j}(\mathbf{v}, t) = 0\end{aligned} \hspace{\stretch{1}}(1.0.17b)

As above we have for the steady state

\begin{aligned}0 &= \frac{\partial {}}{\partial {t}} c(\mathbf{v}, t) \\ &= \boldsymbol{\nabla}_\mathbf{v} \cdot \left( -D \boldsymbol{\nabla}_\mathbf{v} c(\mathbf{v}, t) - \eta \mathbf{v} c(\mathbf{v}, t)\right) \\ &= -D \boldsymbol{\nabla}^2_\mathbf{v} c - \eta \boldsymbol{\nabla}_\mathbf{v} \cdot (\mathbf{v} c)\end{aligned} \hspace{\stretch{1}}(1.0.18)

Integrating once

\begin{aligned}D \boldsymbol{\nabla}_\mathbf{v} c = -\eta \mathbf{v} c + \text{vector constant, assumed zero}\end{aligned} \hspace{\stretch{1}}(1.0.19)

This is a set of three equations, one for each component v_\alpha of \mathbf{v}

\begin{aligned}\frac{\partial {c}}{\partial {v_\alpha}} = -\frac{\eta}{D} v_\alpha c \end{aligned} \hspace{\stretch{1}}(1.0.20)

So that our steady state solution is

\begin{aligned}c(\mathbf{v}, t \rightarrow \infty) \propto \exp\left(-\frac{v_x^2 +v_y^2 +v_z^2 }{ 2 (D/\eta) }\right).\end{aligned} \hspace{\stretch{1}}(1.0.21)

Computing the average 3D squared velocity for this distribution, we have

\begin{aligned}\frac{1}{{2}} m \left\langle{{\mathbf{v} \cdot \mathbf{v}}}\right\rangle &= \frac{1}{2} m \frac{ \int dv_x dv_y dv_z \left( v_x^2 + v_y^2 + v_z^2 \right) \exp \left( -\frac{ v_x^2 + v_y^2 + v_z^2 }{ 2 (D/\eta) } \right)}{ \int dv_x dv_y dv_z \exp \left( -\frac{ v_x^2 + v_y^2 + v_z^2 }{ 2 (D/\eta) } \right)} \\ &= \frac{1}{2} m \left(\frac{ \int dv_x v_x^2 \exp \left( -\frac{ v_x^2 }{ 2 (D/\eta) } \right)}{ \int dv_x \exp \left( -\frac{ v_x^2 }{ 2 (D/\eta) } \right)}+\frac{ \int dv_y v_y^2 \exp \left( -\frac{ v_y^2 }{ 2 (D/\eta) } \right)}{ \int dv_y \exp \left( -\frac{ v_y^2 }{ 2 (D/\eta) } \right)}+\frac{ \int dv_z v_z^2 \exp \left( -\frac{ v_z^2 }{ 2 (D/\eta) } \right)}{ \int dv_z \exp \left( -\frac{ v_z^2 }{ 2 (D/\eta) } \right)}\right) \\ &= 3 \left( \frac{1}{2} k_{\mathrm{B}} T\right) \\ &= \frac{3}{2} k_{\mathrm{B}} T\end{aligned} \hspace{\stretch{1}}(1.0.22)

For each component v_\alpha the normalization kills off the contributions from the other components, leaving us with the usual 3 k_{\mathrm{B}} T/2 ideal gas kinetic energy.
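That per-component average is easy to verify with direct numerical quadrature. The sketch below works in units where m = 1 and D/\eta = k_{\mathrm{B}} T/m = 1, so each component should contribute exactly 1/2:

```python
# Simple numerical quadrature of <v_a^2> for the Gaussian exp(-v^2 / (2 D/eta)),
# in units where m = 1 and D/eta = kT/m = 1.
from math import exp

def avg_v2(d_over_eta, vmax=10.0, npts=2001):
    dv = 2.0 * vmax / (npts - 1)
    num = den = 0.0
    for i in range(npts):
        v = -vmax + i * dv
        w = exp(-v * v / (2.0 * d_over_eta))
        num += v * v * w * dv
        den += w * dv
    return num / den

per_component = 0.5 * avg_v2(1.0)        # (1/2) m <v_a^2>, i.e. k_B T / 2 here
print(per_component, 3 * per_component)  # and the 3D total, 3 k_B T / 2
```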

Phase space

Now let’s switch directions a bit and look at a more general system, described by a phase space of generalized coordinates

\begin{aligned}\{ x_{i_\alpha}(t),p_{i_\alpha}(t) \}\end{aligned} \hspace{\stretch{1}}(1.0.23)


  1. i = \mbox{molecule or particle number}
  2. \alpha \in \{x, y, z\}
  3. \mbox{Dimension} = N_{\text{particles}} \times 2 \times d, where d is the physical space dimension.

The motion in phase space will be governed by how each of these coordinates changes for all the particles. Assuming a Hamiltonian H, and recalling that H = \dot{x} p - \mathcal{L}, we find

\begin{aligned}\frac{\partial {H}}{\partial {p}} = \dot{x}\end{aligned} \hspace{\stretch{1}}(1.0.24a)

\begin{aligned}\frac{\partial {H}}{\partial {x}} = -\frac{\partial {\mathcal{L}}}{\partial {x}} = - \frac{d{{p}}}{dt},\end{aligned} \hspace{\stretch{1}}(1.0.24b)

we have the following set of equations describing the entire system

\begin{aligned}\frac{d{{}}}{dt} x_{i_\alpha}(t) = \frac{\partial {H}}{\partial {p_{i_\alpha}}}\end{aligned} \hspace{\stretch{1}}(1.0.25a)

\begin{aligned}\frac{d{{}}}{dt} p_{i_\alpha}(t)= -\frac{\partial {H}}{\partial {x_{i_\alpha}}}\end{aligned} \hspace{\stretch{1}}(1.0.25b)

Example: 1D SHO

\begin{aligned}H = \frac{1}{{2}} \frac{p^2}{m} + \frac{1}{{2}} k x^2\end{aligned} \hspace{\stretch{1}}(1.0.26)

This has phase space trajectories as in (Fig4).

Fig4: Classical SHO phase space trajectories
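Those closed elliptical orbits can be confirmed numerically. This Python sketch (with arbitrary parameters) integrates Hamilton's equations for this Hamiltonian using a leapfrog step, and checks that H, and hence the phase space orbit, is preserved:

```python
# Integrate dx/dt = p/m, dp/dt = -k x with a leapfrog (kick-drift-kick) step;
# the trajectory stays on the ellipse H = p^2/(2m) + k x^2/2 = E.
m, k = 1.0, 4.0
x, p = 1.0, 0.0
dt, steps = 0.001, 10000

def H(x, p):
    return 0.5 * p * p / m + 0.5 * k * x * x

E0 = H(x, p)
for _ in range(steps):
    p -= 0.5 * dt * k * x   # half kick
    x += dt * p / m         # drift
    p -= 0.5 * dt * k * x   # half kick

print(E0, H(x, p))          # initial and final energy should match closely
```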

Posted in Math and Physics Learning. | Tagged: , , , , , , , , , , , | Leave a Comment »