
Gaussian quadratic form integrals and multivariable approximation of exponential integrals

Posted by peeterjoot on January 26, 2014


Motivation

In [1] eq. I.2.20 is the approximation

\begin{aligned}\int d\mathbf{q} e^{-f(\mathbf{q})/\hbar} \approx e^{-f(\mathbf{a})/\hbar} \sqrt{\frac{(2 \pi \hbar)^N}{\text{Det} f''(\mathbf{a})} },\end{aligned} \hspace{\stretch{1}}(1.1.1)

where [ f''(\mathbf{a}) ]_{ij} \equiv {\left.{{\partial^2 f/\partial q_i \partial q_j}}\right\vert}_{{\mathbf{q} = \mathbf{a}}}, and \mathbf{q} is an N dimensional vector. Here \mathbf{a} is assumed to be an extremum of f. This follows from a generalization of the Gaussian integral result. Let’s derive both in detail.

Guts

First, to second order, let’s expand f(\mathbf{q}) around a min or max at \mathbf{q} = \mathbf{a}. The usual trick, presuming that one doesn’t remember the form of this generalized Taylor expansion, is to expand g(t) = f(\mathbf{a} + t \mathbf{q}) around t = 0, then evaluate at t = 1. We have

\begin{aligned}g'(t) &= \sum_i \frac{\partial {f(\mathbf{a} + t \mathbf{q})}}{\partial {(a_i + t q_i)}} \frac{d{{ (a_i + t q_i) }}}{dt} \\ &= \sum_i q_i \frac{\partial {f(\mathbf{a} + t \mathbf{q})}}{\partial {(a_i + t q_i)}} \\ &= \mathbf{q} \cdot \left(  {\left.{{\boldsymbol{\nabla}_\mathbf{q} f(\mathbf{q})}}\right\vert}_{{\mathbf{q} = \mathbf{a} + t \mathbf{q}}}  \right).\end{aligned} \hspace{\stretch{1}}(1.2.2)

The second derivative is

\begin{aligned}g''(t) = \sum_{i j} q_i q_j \frac{\partial {}}{\partial {(a_j + t q_j)}} \frac{\partial {f(\mathbf{a} + t \mathbf{q})}}{\partial {(a_i + t q_i)}},\end{aligned} \hspace{\stretch{1}}(1.2.3)

This gives

\begin{aligned}\begin{aligned}g'(0) &= \mathbf{q} \cdot {\left.{{\boldsymbol{\nabla}_\mathbf{q} f(\mathbf{q})}}\right\vert}_{{\mathbf{q} = \mathbf{a}}} = \sum_i q_i {\left.{{\partial_{q_i} f}}\right\vert}_{\mathbf{a}} \\ g''(0) &= {\left.{{\left( \mathbf{q} \cdot \boldsymbol{\nabla}_\mathbf{q} \right)^2 f(\mathbf{q})}}\right\vert}_{{\mathbf{q} = \mathbf{a}}} = \sum_{i j} q_i q_j {\left.{{\partial_{q_i} \partial_{q_j} f}}\right\vert}_{\mathbf{a}}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.2.4)

Putting these together, the expansion to second order in t is

\begin{aligned}f(\mathbf{a} + t \mathbf{q}) \approx f(\mathbf{a}) + \sum_i q_i {\left.{{\partial_{q_i} f}}\right\vert}_{\mathbf{a}} \frac{t^1}{1!}+ \sum_{i j} q_i q_j {\left.{{\partial_{q_i} \partial_{q_j} f}}\right\vert}_{\mathbf{a}} \frac{t^2}{2!},\end{aligned} \hspace{\stretch{1}}(1.2.5)

or

\begin{aligned}f(\mathbf{a} + \mathbf{q}) \approx f(\mathbf{a}) + \sum_i q_i {\left.{{ \left(  \frac{\partial {f}}{\partial {q_i}} \right)}}\right\vert}_{\mathbf{a}}+ \frac{1}{{2}} \sum_{i j} q_i q_j {\left.{{\left(  \frac{\partial^2 f}{\partial q_i \partial q_j}  \right)}}\right\vert}_{\mathbf{a}}.\end{aligned} \hspace{\stretch{1}}(1.2.6)

We can put the terms up to second order into tidy matrix form, writing

\begin{aligned}\mathbf{b} = {\left.{{\left(  \boldsymbol{\nabla}_\mathbf{q} f \right)}}\right\vert}_{\mathbf{a}}\end{aligned} \hspace{\stretch{1}}(1.0.7a)

\begin{aligned}A = {\begin{bmatrix}{\left.{{ \left(  \frac{\partial^2 f}{\partial q_i \partial q_j} \right)}}\right\vert}_{\mathbf{a}}\end{bmatrix}}_{i j}.\end{aligned} \hspace{\stretch{1}}(1.0.7b)

Note that the matrix of eq. 1.0.7b is real and symmetric, and can thus be brought to diagonal form by an orthogonal transformation. Putting the pieces together, we have

\begin{aligned}f(\mathbf{a} + \mathbf{q}) \approx f(\mathbf{a}) + \mathbf{q}^\text{T} \mathbf{b} + \frac{1}{{2}} \mathbf{q}^\text{T} A \mathbf{q}.\end{aligned} \hspace{\stretch{1}}(1.0.8)

Integrating this, we have

\begin{aligned}\int dq_1 dq_2 \cdots dq_N \exp\left( -\left(  f(\mathbf{a}) + \mathbf{q}^\text{T} \mathbf{b} + \frac{1}{{2}} \mathbf{q}^\text{T} A \mathbf{q}  \right)\right)=e^{-f(\mathbf{a})}\int dq_1 dq_2 \cdots dq_N \exp\left(  -\mathbf{q}^\text{T} \mathbf{b} - \frac{1}{{2}} \mathbf{q}^\text{T} A \mathbf{q}  \right).\end{aligned} \hspace{\stretch{1}}(1.0.9)

Employing an orthogonal change of variables to diagonalize the matrix

\begin{aligned}A = O^\text{T} D O,\end{aligned} \hspace{\stretch{1}}(1.0.10)

and \mathbf{r} = O \mathbf{q}, or r_i = O_{ik} q_k, the volume element after transformation is

\begin{aligned}dr_1 dr_2 \cdots dr_N &= \frac{\partial(r_1, r_2, \cdots, r_N)}{\partial(q_1, q_2, \cdots, q_N)}dq_1 dq_2 \cdots dq_N \\ &= \begin{vmatrix}O_{11} & O_{12} & \cdots & O_{1N} \\ O_{21} & O_{22} & \cdots & O_{2N} \\ \vdots & \vdots & \ddots & \vdots \\  O_{N1} & O_{N2} & \cdots & O_{NN} \\ \end{vmatrix}dq_1 dq_2 \cdots dq_N \\ &= \left\lvert \text{Det} O \right\rvert dq_1 dq_2 \cdots dq_N \\ &= dq_1 dq_2 \cdots dq_N,\end{aligned} \hspace{\stretch{1}}(1.0.10)

since \text{Det} O = \pm 1 for an orthogonal matrix.
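This diagonalization and the unit Jacobian are easy to sanity check numerically. Here’s a quick sketch using numpy’s symmetric eigendecomposition (the specific matrix is an arbitrary test choice, not anything from the derivation):

    import numpy as np

    # An arbitrary real symmetric matrix standing in for A.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = (M + M.T) / 2

    # eigh factors A = V diag(lam) V^T with orthonormal columns in V,
    # so O = V^T in the notation above.
    lam, V = np.linalg.eigh(A)
    O = V.T

    assert np.allclose(O.T @ np.diag(lam) @ O, A)   # A = O^T D O
    assert np.allclose(O @ O.T, np.eye(4))          # orthogonality
    print(np.linalg.det(O))                         # +/- 1: unit volume element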

Our integral is

\begin{aligned}e^{-f(\mathbf{a})}\int dq_1 dq_2 \cdots dq_N \exp\left(  -\mathbf{q}^\text{T} \mathbf{b} - \frac{1}{{2}} \mathbf{q}^\text{T} A \mathbf{q}  \right) &= e^{-f(\mathbf{a})}\int dr_1 dr_2 \cdots dr_N \exp\left(  -\mathbf{q}^\text{T} O^\text{T} O \mathbf{b} - \frac{1}{{2}} \mathbf{q}^\text{T} O^\text{T} D O \mathbf{q}  \right) \\ &= e^{-f(\mathbf{a})}\int dr_1 dr_2 \cdots dr_N \exp\left(  -\mathbf{r}^\text{T} (O \mathbf{b}) - \frac{1}{{2}} \mathbf{r}^\text{T} D \mathbf{r}  \right) \\ &= e^{-f(\mathbf{a})}\int dr_1 e^{ -\frac{1}{{2}} r_1^2 \lambda_1 - r_1 (O \mathbf{b})_1 }\int dr_2 e^{ -\frac{1}{{2}} r_2^2 \lambda_2 - r_2 (O \mathbf{b})_2 }\cdots \int dr_N e^{ -\frac{1}{{2}} r_N^2 \lambda_N - r_N (O \mathbf{b})_N }.\end{aligned} \hspace{\stretch{1}}(1.0.10)

We now have a product of integrals, each of the regular Gaussian form. One such integral is

\begin{aligned}\int dx e^{-a x^2/2 + J x} &= \int dx \exp\left(-\frac{1}{{2}} \left(\left(  \sqrt{a} x - J/\sqrt{a}  \right)^2- \left(  J/\sqrt{a}  \right)^2\right)\right) \\ &= e^{J^2/2a} \sqrt{2 \pi \int_0^\infty r dr e^{-a r^2/2}},\end{aligned} \hspace{\stretch{1}}(1.0.10)

where the square root in the last step is the usual polar coordinates evaluation of the remaining Gaussian integral.

This is just

\begin{aligned}\int dx e^{-a x^2/2 + J x}= e^{J^2/2a} \sqrt{ \frac{2 \pi}{a} }.\end{aligned} \hspace{\stretch{1}}(1.0.10)
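This single variable result is easy to verify numerically. A sketch using scipy’s quad (the values of a and J are arbitrary test choices):

    import numpy as np
    from scipy.integrate import quad

    a, J = 1.7, 0.4  # arbitrary test values, a > 0

    numeric, _ = quad(lambda x: np.exp(-a * x**2 / 2 + J * x), -np.inf, np.inf)
    closed_form = np.exp(J**2 / (2 * a)) * np.sqrt(2 * np.pi / a)
    print(numeric, closed_form)  # should agree to quad's tolerance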

Applying this to the integral of interest, with a = \lambda_i and J = -m_i, where m_i = (O \mathbf{b})_i, we have

\begin{aligned}\begin{aligned}e^{-f(\mathbf{a})}\int &dq_1 dq_2 \cdots dq_N \exp\left(  -\mathbf{q}^\text{T} \mathbf{b} - \frac{1}{{2}} \mathbf{q}^\text{T} A \mathbf{q}  \right) \\ &=e^{-f(\mathbf{a})}e^{m_1^2/2\lambda_1} \sqrt{ \frac{2 \pi}{\lambda_1}}e^{m_2^2/2\lambda_2} \sqrt{ \frac{2 \pi}{\lambda_2}}\cdots e^{m_N^2/2\lambda_N} \sqrt{ \frac{2 \pi}{\lambda_N}} \\ &=e^{-f(\mathbf{a})}\sqrt{\frac{(2 \pi)^N}{\text{Det} A}}\exp\left(\frac{1}{{2}}\left(  m_1^2/\lambda_1 + m_2^2/\lambda_2 + \cdots + m_N^2/\lambda_N  \right) \right),\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.10)

since \text{Det} A = \lambda_1 \lambda_2 \cdots \lambda_N.

This last exponential argument can be put into matrix form

\begin{aligned}m_1^2/\lambda_1+m_2^2/\lambda_2+ \cdots + m_N^2/\lambda_N &= (O \mathbf{b})^\text{T} D^{-1} (O \mathbf{b}) \\ &= \mathbf{b}^\text{T} O^\text{T} D^{-1} O \mathbf{b} \\ &= \mathbf{b}^\text{T} A^{-1} \mathbf{b}.\end{aligned} \hspace{\stretch{1}}(1.0.10)

Finally, referring back to eq. 1.0.7, we have

\begin{aligned}\int d\mathbf{q} e^{-f(\mathbf{q})} \approx e^{-f(\mathbf{a})}\sqrt{\frac{(2 \pi)^N}{\text{Det} A}}e^{\mathbf{b}^\text{T} A^{-1} \mathbf{b}/2}.\end{aligned} \hspace{\stretch{1}}(1.0.10)

Observe that we can recover eq. 1.1.1 by noting that \mathbf{b} = 0 was assumed for that system (i.e. \mathbf{a} was an extremum point), and that the rescaling f \rightarrow f/\hbar takes A \rightarrow A/\hbar, since A just contains the second partials, so that (2\pi)^N/\text{Det}(A/\hbar) = (2\pi\hbar)^N/\text{Det} A.
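Because an exactly quadratic f makes the approximation an equality, the general result is also easy to check numerically. A sketch using scipy’s nquad in two dimensions (the matrix, vector, and integration box are arbitrary test choices):

    import numpy as np
    from scipy.integrate import nquad

    # For exactly quadratic f (with f(a) = 0) the approximation is exact.
    A = np.array([[2.0, 0.3], [0.3, 1.5]])  # symmetric positive definite test matrix
    b = np.array([0.2, -0.1])

    def integrand(x, y):
        q = np.array([x, y])
        return np.exp(-(q @ b + q @ A @ q / 2))

    numeric, _ = nquad(integrand, [[-8, 8], [-8, 8]])
    closed = np.sqrt((2 * np.pi)**2 / np.linalg.det(A)) \
        * np.exp(b @ np.linalg.inv(A) @ b / 2)
    print(numeric, closed)  # should agree closely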

An afterword on notational sugar:

We didn’t need it, but it seems worth noting that we can write the Taylor expansion of eq. 1.0.8 in operator form as

\begin{aligned}f(\mathbf{a} + \mathbf{q}) = \sum_{k = 0}^\infty \frac{1}{{k!}} {\left.{{ \left( \mathbf{q} \cdot \boldsymbol{\nabla}_{\mathbf{q}'} \right)^k f(\mathbf{q}') }}\right\vert}_{{\mathbf{q}' = \mathbf{a}}}={\left.{{ e^{\mathbf{q} \cdot \boldsymbol{\nabla}_{\mathbf{q}'}} f(\mathbf{q}') }}\right\vert}_{{\mathbf{q}' = \mathbf{a}}}.\end{aligned} \hspace{\stretch{1}}(1.0.18)
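In one dimension this shift operator identity is easy to check symbolically. A sketch using sympy, with an arbitrary smooth test function:

    import sympy as sp

    a, q = sp.symbols('a q')
    f = sp.sin  # arbitrary smooth test function

    # Partial sums of sum_k (q^k/k!) f^(k)(a) should reproduce f(a + q).
    taylor = sum(q**k / sp.factorial(k) * sp.diff(f(a), a, k) for k in range(12))
    err = sp.series(f(a + q) - taylor, q, 0, 12).removeO()
    print(sp.simplify(err))  # 0: the first 12 terms agree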

References

[1] A. Zee. Quantum Field Theory in a Nutshell. Universities Press, 2005.


An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 3, 2013

In A compilation of notes, so far, for ‘PHY452H1S Basic Statistical Mechanics’ I posted a link to this compilation of statistical mechanics course notes.

That compilation now includes all of the following too (no further updates will be made to any of these):

February 28, 2013 Rotation of diatomic molecules

February 28, 2013 Helmholtz free energy

February 26, 2013 Statistical and thermodynamic connection

February 24, 2013 Ideal gas

February 16, 2013 One dimensional well problem from Pathria chapter II

February 15, 2013 1D pendulum problem in phase space

February 14, 2013 Continuing review of thermodynamics

February 13, 2013 Lightning review of thermodynamics

February 11, 2013 Cartesian to spherical change of variables in 3d phase space

February 10, 2013 n SHO particle phase space volume

February 10, 2013 Change of variables in 2d phase space

February 10, 2013 Some problems from Kittel chapter 3

February 07, 2013 Midterm review, thermodynamics

February 06, 2013 Limit of unfair coin distribution, the hard way

February 05, 2013 Ideal gas and SHO phase space volume calculations

February 03, 2013 One dimensional random walk

February 02, 2013 1D SHO phase space

February 02, 2013 Application of the central limit theorem to a product of random vars

January 31, 2013 Liouville’s theorem questions on density and current

January 30, 2013 State counting


PHY452H1S Basic Statistical Mechanics. Problem Set 2: Generating functions and diffusion

Posted by peeterjoot on January 26, 2013


Disclaimer

This is an ungraded set of answers to the problems posed.

Question: Diffusion

The usual diffusion equation for the probability density in one dimension is given by

\begin{aligned}\frac{\partial {P}}{\partial {t}}(x, t) = D \frac{\partial^2 {{P}}}{\partial {{x}}^2}(x, t)\end{aligned} \hspace{\stretch{1}}(1.0.1)

where D is the diffusion constant. Define the Fourier components of the probability distribution via

\begin{aligned}P(x, t) = \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, t) \exp\left( i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}\tilde{P}(k, t) = \int_{-\infty}^\infty dx P(x, t) \exp\left( -i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.2b)

This is useful since the diffusion equation is linear in the probability and each Fourier component will evolve independently. Using this, solve the diffusion equation to obtain \tilde{P}(k,t) in Fourier space given the initial \tilde{P}(k,0).

Assuming an initial Gaussian profile

\begin{aligned}P(x, 0) = \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(-\frac{x^2}{2 \sigma^2}\right),\end{aligned} \hspace{\stretch{1}}(1.3)

obtain the probability density P(x,t) at a later time t. (NB: Fourier transform, get the solution, transform back.) Schematically plot the profile at the initial time and a later time.

A small modulation on top of a uniform value

Let the probability density be proportional to

\begin{aligned}\frac{1}{{L}} + A \sin(k_0 x)\end{aligned} \hspace{\stretch{1}}(1.4)

at an initial time t = 0. Assume this is in a box of large size L, but ignore boundary effects except to note that it will help to normalize the constant piece, assuming the oscillating piece integrates to zero. Also note that we have to assume A < 1/L to ensure that the probability density is positive. Obtain P(x,t) at a later time t. Roughly how long does the modulation take to decay away? Schematically plot the profile at the initial time and a later time.

Answer

Inserting the transform definitions we have

\begin{aligned}0 &= \left( \frac{\partial {}}{\partial {t}} - D \frac{\partial^2 {{}}}{\partial {{x}}^2} \right) P \\ &= \left( \frac{\partial {}}{\partial {t}} - D \frac{\partial^2 {{}}}{\partial {{x}}^2} \right) \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, t) \exp\left( i k x \right) \\ &=\int_{-\infty}^\infty \frac{dk}{2 \pi} \left(\frac{\partial {}}{\partial {t}} \tilde{P}(k, t) + k^2 D\tilde{P}(k, t) \right)\exp\left( i k x \right),\end{aligned} \hspace{\stretch{1}}(1.0.5)

We conclude that

\begin{aligned}0 = \frac{\partial {\tilde{P}(k, t)}}{\partial {t}} + k^2 D\tilde{P}(k, t),\end{aligned} \hspace{\stretch{1}}(1.0.6)

or

\begin{aligned}\tilde{P}(k, t) = A(k) e^{-k^2 D t}.\end{aligned} \hspace{\stretch{1}}(1.0.7)

Fixing the integration constant A(k) by the initial time distribution, so that A(k) = \tilde{P}(k, 0), we can write

\begin{aligned}\boxed{\tilde{P}(k, t) = \tilde{P}(k, 0) e^{-k^2 D t}.}\end{aligned} \hspace{\stretch{1}}(1.0.8)

The time evolution of the distribution’s transform just requires multiplication by the decaying exponential factor e^{-k^2 D t}.
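In discrete form this evolve-by-multiplication is a one liner. A sketch using numpy’s FFT (the grid, diffusion constant, and initial profile are arbitrary illustration choices):

    import numpy as np

    D, t = 1.0, 0.1
    x = np.linspace(-10, 10, 1024, endpoint=False)
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)  # angular wavenumbers

    P0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # initial Gaussian, sigma = 1

    # Each Fourier mode decays independently by exp(-k^2 D t).
    Pt = np.fft.ifft(np.fft.fft(P0) * np.exp(-k**2 * D * t)).real

    print(Pt.sum() * dx)  # ~1: diffusion preserves normalization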

Propagator for the diffusion equation

We can also use this to express the explicit time evolution of the distribution

\begin{aligned}P(x, t) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, 0) e^{-k^2 D t}\exp\left( i k x \right) \\ &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \int_{-\infty}^\infty dx' P(x', 0) \exp\left( -i k x' \right)e^{-k^2 D t}\exp\left( i k x \right) \\ &= \int_{-\infty}^\infty dx' P(x', 0) \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right)\end{aligned} \hspace{\stretch{1}}(1.0.9)

Our distribution’s time evolution is given by convolution with a propagator function

\begin{aligned}P(x, t) = \int dx' P(x', 0) G(x', x)\end{aligned} \hspace{\stretch{1}}(1.0.10a)

\begin{aligned}G(x', x) =\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right)\end{aligned} \hspace{\stretch{1}}(1.0.10b)

For t \ge 0 we can complete the square, finding that this propagator is

\begin{aligned}G(x', x) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right) \\ &= \exp\left( \left(\frac{i (x - x')}{2 \sqrt{D t}} \right)^2 \right)\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( - \left(k \sqrt{D t} - \frac{i (x - x')}{2 \sqrt{D t}} \right)^2 \right)\end{aligned} \hspace{\stretch{1}}(1.0.11)

or

\begin{aligned}\boxed{G(x', x) =\frac{1}{\sqrt{4 \pi D t}} \exp\left(-\frac{(x - x')^2}{4 D t}\right).}\end{aligned} \hspace{\stretch{1}}(1.0.12)
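This closed form can be checked against a direct numerical inverse transform. A sketch using scipy’s quad (the imaginary part of the integrand is odd in k and integrates away, so only the cosine part is needed; parameter values are arbitrary):

    import numpy as np
    from scipy.integrate import quad

    D, t, dx = 1.0, 0.3, 0.7  # arbitrary test values; dx stands for x - x'

    numeric, _ = quad(lambda k: np.exp(-k**2 * D * t) * np.cos(k * dx) / (2 * np.pi),
                      -np.inf, np.inf)
    closed = np.exp(-dx**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
    print(numeric, closed)  # should agree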

This propagator is plotted schematically, as a function of t for fixed x - x', in (Fig1).

Fig1: Form of the propagator function for the diffusion equation

Gaussian

For the Gaussian of 1.3 we compute the initial time Fourier transform

\begin{aligned}\tilde{P}(k) &= \int_{-\infty}^\infty dx \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(-\frac{x^2}{2 \sigma^2}-i k x \right) \\ &= \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(\left( \frac{\sqrt{ 2 \sigma^2}}{2} k i\right)^2\right)\int_{-\infty}^\infty dx \exp\left(-\left( \frac{x}{\sqrt{2 \sigma^2} } + \frac{\sqrt{ 2 \sigma^2}}{2} k i\right)^2\right) \\ &= \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(-\frac{\sigma^2 k^2}{2}\right) \sqrt{2 \pi \sigma^2} \\ &= \exp\left(-\frac{\sigma^2 k^2}{2}\right).\end{aligned} \hspace{\stretch{1}}(1.0.13)

The time evolution of the generating function is

\begin{aligned}\tilde{P}(k, t) = \exp\left(-\frac{\sigma^2 k^2}{2} - D k^2 t\right),\end{aligned} \hspace{\stretch{1}}(1.0.14)

and we can find our time evolved probability density by inverse transforming

\begin{aligned}P(x, t) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left(-\frac{\sigma^2 k^2}{2} - D k^2 t + i k x\right) \\ &= \exp\left(\left(\frac{i x}{ 2 \sqrt{\frac{\sigma^2}{2} + D t} }\right)^2\right)\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left(-\left(k \sqrt{\frac{\sigma^2}{2} + D t} - \frac{i x}{ 2 \sqrt{\frac{\sigma^2}{2} + D t} }\right)^2\right)\end{aligned} \hspace{\stretch{1}}(1.0.15)

For t \ge 0 this is

\begin{aligned}\boxed{P(x, t) =\frac{1}{{\sqrt{2 \pi \left( \sigma^2 + 2 D t \right) } }}\exp\left(-\frac{x^2}{2 \left( \sigma^2 + 2 D t \right) }\right).}\end{aligned} \hspace{\stretch{1}}(1.0.16)

As a check, we see that this reproduces the t = 0 value as expected. A further check using Mathematica, applying the propagator 1.0.12, also finds the same result as this manual calculation.
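The same propagator check can be done in Python. A sketch using scipy’s quad to convolve the initial profile with 1.0.12 and compare to 1.0.16 at a single point (all parameter values are arbitrary):

    import numpy as np
    from scipy.integrate import quad

    sigma, D, t, x = 1.0, 1.0, 0.5, 0.8  # arbitrary test point and parameters

    def G(xp):   # propagator 1.0.12 for this x, t
        return np.exp(-(x - xp)**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

    def P0(xp):  # initial Gaussian profile
        return np.exp(-xp**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

    numeric, _ = quad(lambda xp: P0(xp) * G(xp), -np.inf, np.inf)
    var = sigma**2 + 2 * D * t
    closed = np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    print(numeric, closed)  # should agree, confirming 1.0.16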

This is plotted for D = \sigma = 1 in (Fig2) for a couple different times t \ge 0.

Fig2: Gaussian probability density time evolution with diffusion

Boxed constant with small oscillation

The normalization of the distribution depends on the interval boundaries. With the box range given by x \in [a, a + L] we have

\begin{aligned}\int_a^{a + L} \left( \frac{1}{{L}} + A \sin( k_0 x) \right) dx=1 - \frac{A}{k_0} \left( \cos( k_0(a + L) ) - \cos( k_0 a ) \right).\end{aligned} \hspace{\stretch{1}}(1.0.17)

With an even range for the box, x \in [-L/2, L/2], this is unity since the cosine is an even function.

To find the distribution at a later point in time we can utilize the propagator

\begin{aligned}P(x, t) = \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \left( \frac{1}{{L}} + A \sin( k_0 x' ) \right) \exp\left( - \frac{(x' - x)^2}{4 D t}\right)\end{aligned} \hspace{\stretch{1}}(1.0.18)

Let’s write this as

\begin{aligned}P(x, t) = P_{\mathrm{rect}}(x, t) + P_{\mathrm{sin}}(x, t)\end{aligned} \hspace{\stretch{1}}(1.0.19a)

\begin{aligned}P_{\mathrm{rect}}(x, t) =\frac{1}{{2 L \sqrt{\pi D t} }} \int_{-L/2}^{L/2} dx' \exp\left( - \frac{(x' - x)^2}{4 D t}\right)\end{aligned} \hspace{\stretch{1}}(1.0.19b)

\begin{aligned}P_{\mathrm{sin}}(x, t)=\frac{A}{2 \sqrt{\pi D t} } \int_{-L/2}^{L/2} dx' \sin( k_0 x' ) \exp\left( - \frac{(x' - x)^2}{4 D t}\right)\end{aligned} \hspace{\stretch{1}}(1.0.19c)

Applying a u = (x' - x)/\sqrt{4 D t} change of variables for the first term, we can reduce it to a difference of error functions

\begin{aligned}P_{\mathrm{rect}}(x, t) &= \frac{1}{{L}} \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \exp\left( - \frac{(x' - x)^2}{4 D t}\right) \\ &= \frac{1}{{L \sqrt{\pi}}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{-u^2} \\ &= \frac{1}{{2 L}} \left( \text{erf}\left( \frac{L/2 -x}{2 \sqrt{D t}} \right)-\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} \right)\right)\end{aligned} \hspace{\stretch{1}}(1.0.20)

Following Mathematica, let’s introduce a two argument error function for the difference between two points

\begin{aligned}\text{erf}(z_0, z_1) \equiv \text{erf}(z_1) - \text{erf}(z_0).\end{aligned} \hspace{\stretch{1}}(1.0.21)

Using this, our rectangular function’s time evolution can be written

\begin{aligned}P_{\mathrm{rect}}(x, t)=\frac{1}{{2 L}} \text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} ,\frac{L/2 -x}{2 \sqrt{D t}} \right)\end{aligned} \hspace{\stretch{1}}(1.0.22)
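Eq. 1.0.22 is easy to compare against direct numerical integration of 1.0.19b. A sketch using scipy’s erf and quad (parameter values arbitrary):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import erf

    L, D, t, x = 1.0, 1.0, 0.01, 0.2  # arbitrary test values

    direct, _ = quad(lambda xp: np.exp(-(xp - x)**2 / (4 * D * t))
                     / (L * np.sqrt(4 * np.pi * D * t)), -L / 2, L / 2)
    closed = (erf((L / 2 - x) / (2 * np.sqrt(D * t)))
              - erf(-(L / 2 + x) / (2 * np.sqrt(D * t)))) / (2 * L)
    print(direct, closed)  # should agree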

For L = D = 1, and t = 10^{-8}, this is plotted in (Fig3). Somewhat surprisingly, this difference of error functions does appear to result in a rectangular function for small t.

Fig3: Rectangular part of the probability distribution for very small t

The time evolution of this non-oscillation part of the probability distribution is plotted as a function of both t and x in (Fig4).

Fig4: Time evolution of the rectangular part of the probability distribution

For the sine piece we can also find a solution in terms of (complex) error functions

\begin{aligned}P_{\mathrm{sin}}(x, t) &= A \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \sin( k_0 x' ) \exp\left( - \frac{(x' - x)^2}{4 D t}\right) \\ &= \frac{A}{\sqrt{\pi}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du \sin( k_0 ( x + 2 u \sqrt{D t} ) ) e^{-u^2} \\ &= \frac{A}{2 i \sqrt{\pi}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du \left( e^{ i k_0 ( x + 2 u \sqrt{D t} ) } -e^{ -i k_0 ( x + 2 u \sqrt{D t} ) }\right)e^{-u^2} \\ &= \frac{A}{2 i \sqrt{\pi}}\left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -u^2 + 2 i k_0 u \sqrt{D t} } -e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -u^2 -2 i k_0 u \sqrt{D t} }\right) \\ &= \frac{A}{2 i \sqrt{\pi}}e^{ -k_0^2 D t } \left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -(u - i k_0 \sqrt{D t})^2 } -e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -(u + i k_0 \sqrt{D t})^2 }\right) \\ &= \frac{A}{2 i \sqrt{\pi}}e^{ -k_0^2 D t } \left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}}^{\frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t}} dv e^{ -v^2 }-e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}}^{\frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}} dv e^{ -v^2 }\right) \\ &= \frac{A}{4 i }e^{ -k_0^2 D t } \left( e^{ i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t} \right)-e^{ -i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}\right)\right)\end{aligned} \hspace{\stretch{1}}(1.0.23)
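This complex error function result can also be spot checked numerically. A sketch, assuming scipy’s erf (which accepts complex arguments) and quad, with arbitrary parameter values:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import erf

    A, L, D, t, x = 1.0, 1.0, 1.0, 1e-3, 0.1
    k0 = 8 * np.pi  # arbitrary test values

    direct, _ = quad(lambda xp: A * np.sin(k0 * xp)
                     * np.exp(-(xp - x)**2 / (4 * D * t))
                     / np.sqrt(4 * np.pi * D * t), -L / 2, L / 2)

    s = np.sqrt(D * t)
    lo, hi = -(L / 2 + x) / (2 * s), (L / 2 - x) / (2 * s)
    erf2 = lambda z0, z1: erf(z1) - erf(z0)  # two argument erf of 1.0.21
    closed = (A / 4j) * np.exp(-k0**2 * D * t) * (
        np.exp(1j * k0 * x) * erf2(lo - 1j * k0 * s, hi - 1j * k0 * s)
        - np.exp(-1j * k0 * x) * erf2(lo + 1j * k0 * s, hi + 1j * k0 * s))
    print(direct, closed.real)  # imaginary part ~0; real parts should agree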

This is plotted for A = D = L = 1, k_0 = 8 \pi, and t = 10^{-8} in (Fig5).

Fig5: Verification at t -> 0 that the diffusion result is sine like

The diffusion of this, again for A = D = L = 1, k_0 = 8 \pi, and t \in [10^{-5}, 0.01] is plotted in (Fig6). Again we see that we have the expected sine for small t.

Fig6: Diffusion of the oscillatory term

Putting both the rectangular and the windowed sine portions of the probability distribution together, we have the diffusion result for the entire distribution

\begin{aligned}\boxed{\begin{aligned}P(x, t)&=\frac{1}{{2 L}} \text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} ,\frac{L/2 -x}{2 \sqrt{D t}} \right) \\ &+\frac{A}{4 i }e^{ -k_0^2 D t + i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t} \right) \\ &-\frac{A}{4 i }e^{ -k_0^2 D t -i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}\right)\end{aligned}}\end{aligned} \hspace{\stretch{1}}(1.0.24)

It is certainly ugly looking! We see that the oscillation die-off is governed by the \exp( -k_0^2 D t) term. In time

\begin{aligned}t = \frac{1}{{k_0^2 D}},\end{aligned} \hspace{\stretch{1}}(1.0.25)

that oscillation dies away to 1/e of its initial amplitude. This dispersion is plotted at times t = 10^{-5} and t = 1/(k_0^2 D) for L = D = 1, k_0 = 8 \pi and A = 1/2 in (Fig7).

Fig7: Initial time distribution and dispersion of the oscillatory portion to 1/e of initial amplitude

Similar to the individual plots of P_{\mathrm{rect}}(x, t) and P_{\mathrm{sin}}(x, t) above, we plot the time evolution of the total probability distribution P(x, t) in (Fig8). We see in the plots above that the rectangular portion of this distribution will also continue to flatten over time after most of the oscillation has died off.

Fig8: Diffusion of uniform but oscillating probability distribution

An easier solution for the sinusoidal part

After working this problem and talking with classmates about how they solved it (because I was sure I’d done this windowed oscillating distribution the hard way), I now understand what was meant by “ignore boundary effects”. That is, ignore the boundary effects in the sinusoid portion of the distribution. I didn’t see how we could ignore the boundary effects, because doing so would make the sine Fourier transform non-convergent. Ignoring pesky ideas like convergence, we can “approximate” the Fourier transform of the windowed sine as

\begin{aligned}\tilde{P}_{\mathrm{sin}}(k) &\approx A \int_{-\infty}^\infty \sin (k_0 x) e^{-i k x} dx \\ &= \frac{A }{2 i} \int_{-\infty}^\infty \left(e^{i (k_0 - k) x} -e^{-i (k_0 + k) x} \right)dx \\ &= \frac{A \pi}{i} \left( \delta(k - k_0) - \delta(k + k_0)\right)\end{aligned} \hspace{\stretch{1}}(1.0.26)

Now we can inverse Fourier transform the diffusion result with ease since we’ve got delta functions. That is

\begin{aligned}P_{\mathrm{sin}}(x, t) &\approx \frac{1}{{2 \pi}} \frac{A \pi}{i} \int\left( \delta(k - k_0) - \delta(k + k_0)\right)e^{-D k^2 t} e^{i k x} dk \\ &= A e^{-D k_0^2 t} \frac{ e^{i k_0 x} - e^{-i k_0 x}}{2 i} \\ &= A e^{-D k_0^2 t} \sin( k_0 x )\end{aligned} \hspace{\stretch{1}}(1.0.27)
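That this approximate solution satisfies the diffusion equation exactly is a one line symbolic check. A sketch using sympy:

    import sympy as sp

    x, t, D, k0, A = sp.symbols('x t D k_0 A', positive=True)
    P = A * sp.exp(-D * k0**2 * t) * sp.sin(k0 * x)

    # dP/dt - D d^2P/dx^2 should vanish identically.
    print(sp.simplify(sp.diff(P, t) - D * sp.diff(P, x, 2)))  # 0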

Question: Generating function

The Fourier transform of the probability distribution defined above, \tilde{P}(k), is called the “generating function” of the distribution. Show that the n-th derivative of this generating function \partial^n \tilde{P}(k)/\partial k^n at the origin k = 0 is related to the n-th moment of the distribution function, defined via \left\langle{{x^n}}\right\rangle = \int dx P(x) x^n. We will later see that the “partition function” in statistical mechanics is closely related to this concept of a generating function, and derivatives of this partition function can be related to thermodynamic averages of various observables.

Answer

\begin{aligned}{\left.{{ \frac{\partial^n}{\partial k^n} \tilde{P}(k) }}\right\vert}_{{k = 0}} &= {\left.{{ \frac{\partial^n}{\partial k^n} \left( \int_{-\infty}^\infty dx P(x) \exp\left( -i k x \right)\right)}}\right\vert}_{{k = 0}} \\ &= {\left.{{ \left(\int_{-\infty}^\infty dx P(x) (-i x)^n\exp\left( -i k x \right)\right)}}\right\vert}_{{k = 0}} \\ &= (-i)^n \int_{-\infty}^\infty dx P(x) x^n \\ &= (-i)^n \left\langle{{x^n}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.0.26)
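As a concrete example, here’s a sketch using sympy to verify this moment relation for the Gaussian of eq. 1.3:

    import sympy as sp

    x = sp.symbols('x', real=True)
    k = sp.symbols('k', real=True)
    sigma = sp.symbols('sigma', positive=True)

    P = sp.exp(-x**2 / (2 * sigma**2)) / sp.sqrt(2 * sp.pi * sigma**2)
    Pk = sp.integrate(P * sp.exp(-sp.I * k * x), (x, -sp.oo, sp.oo))

    for n in range(1, 5):
        deriv = sp.diff(Pk, k, n).subs(k, 0)
        moment = sp.integrate(P * x**n, (x, -sp.oo, sp.oo))
        print(n, sp.simplify(deriv - (-sp.I)**n * moment))  # 0 for each n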
