# Peeter Joot's Blog.


# Posts Tagged ‘propagator’

## An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 3, 2013

That compilation now includes all of the following too (no further updates will be made to any of these):

February 28, 2013 Rotation of diatomic molecules

February 28, 2013 Helmholtz free energy

February 26, 2013 Statistical and thermodynamic connection

February 24, 2013 Ideal gas

February 16, 2013 One dimensional well problem from Pathria chapter II

February 15, 2013 1D pendulum problem in phase space

February 14, 2013 Continuing review of thermodynamics

February 13, 2013 Lightning review of thermodynamics

February 11, 2013 Cartesian to spherical change of variables in 3d phase space

February 10, 2013 n SHO particle phase space volume

February 10, 2013 Change of variables in 2d phase space

February 10, 2013 Some problems from Kittel chapter 3

February 07, 2013 Midterm review, thermodynamics

February 06, 2013 Limit of unfair coin distribution, the hard way

February 05, 2013 Ideal gas and SHO phase space volume calculations

February 03, 2013 One dimensional random walk

February 02, 2013 1D SHO phase space

February 02, 2013 Application of the central limit theorem to a product of random vars

January 31, 2013 Liouville’s theorem questions on density and current

January 30, 2013 State counting

## PHY452H1S Basic Statistical Mechanics. Problem Set 2: Generating functions and diffusion

Posted by peeterjoot on January 26, 2013


## Question: Diffusion

The usual diffusion equation for the probability density in one dimension is given by

\begin{aligned}\frac{\partial {P}}{\partial {t}}(x, t) = D \frac{\partial^2 {{P}}}{\partial {{x}}^2}(x, t)\end{aligned} \hspace{\stretch{1}}(1.0.1)

where $D$ is the diffusion constant. Define the Fourier components of the probability distribution via

\begin{aligned}P(x, t) = \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, t) \exp\left( i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}\tilde{P}(k, t) = \int_{-\infty}^\infty dx P(x, t) \exp\left( -i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.2b)

This is useful since the diffusion equation is linear in the probability and each Fourier component will evolve independently. Using this, solve the diffusion equation to obtain $\tilde{P}(k,t)$ in Fourier space given the initial $\tilde{P}(k,0)$.

### Assuming an initial Gaussian profile

\begin{aligned}P(x, 0) = \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(-\frac{x^2}{2 \sigma^2}\right),\end{aligned} \hspace{\stretch{1}}(1.3)

obtain the probability density $P(x,t)$ at a later time $t$. (NB: Fourier transform, get the solution, transform back.) Schematically plot the profile at the initial time and a later time.

### A small modulation on top of a uniform value

Let the probability density be proportional to

\begin{aligned}\frac{1}{{L}} + A \sin(k_0 x)\end{aligned} \hspace{\stretch{1}}(1.4)

at an initial time $t = 0$. Assume this is in a box of large size $L$, but ignore boundary effects except to note that it will help to normalize the constant piece, assuming the oscillating piece integrates to zero. Also note that we have
to assume $A < 1/L$ to ensure that the probability density is positive. Obtain $P(x,t)$ at a later time $t$. Roughly how long does the modulation take to decay away? Schematically plot the profile at the initial time and a later time.

Inserting the transform definitions we have

\begin{aligned}0 &= \left( \frac{\partial {}}{\partial {t}} - D \frac{\partial^2 {{}}}{\partial {{x}}^2} \right) P \\ &= \left( \frac{\partial {}}{\partial {t}} - D \frac{\partial^2 {{}}}{\partial {{x}}^2} \right) \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, t) \exp\left( i k x \right) \\ &=\int_{-\infty}^\infty \frac{dk}{2 \pi} \left(\frac{\partial {}}{\partial {t}} \tilde{P}(k, t) + k^2 D\tilde{P}(k, t) \right)\exp\left( i k x \right),\end{aligned} \hspace{\stretch{1}}(1.0.5)

We conclude that

\begin{aligned}0 = \frac{\partial {}}{\partial {t}} \tilde{P}(k, t) + k^2 D\tilde{P}(k, t),\end{aligned} \hspace{\stretch{1}}(1.0.6)

or

\begin{aligned}\tilde{P}(k, t) = A(k) e^{-k^2 D t}.\end{aligned} \hspace{\stretch{1}}(1.0.7)

Evaluating at $t = 0$ fixes the integration constant, $A(k) = \tilde{P}(k, 0)$, so we can write

\begin{aligned}\boxed{\tilde{P}(k, t) = \tilde{P}(k, 0) e^{-k^2 D t}.}\end{aligned} \hspace{\stretch{1}}(1.0.8)

The time evolution of the distribution's transform just requires multiplication by the decaying exponential factor $e^{-k^2 D t}$.
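This decay law is easy to check numerically. Here is a sketch (using numpy rather than the Mathematica used elsewhere in this post; the grid, $D$, and the Gaussian initial profile are arbitrary choices) that evolves the diffusion equation with a finite-difference stepper and compares each Fourier mode against $e^{-k^2 D t}$:

```python
import numpy as np

# Sketch: evolve the diffusion equation with an explicit finite-difference
# stepper, then check that each Fourier mode of the result has decayed by
# exp(-k^2 D t).  The grid, D, and the initial Gaussian are arbitrary choices.
D = 0.5
Lgrid, N = 20.0, 256
dx = Lgrid / N
x = np.arange(N) * dx - Lgrid / 2
P = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)    # initial profile P(x, 0)

dt = 0.2 * dx**2 / D                          # well inside the stability limit
steps = int(0.5 / dt)
for _ in range(steps):
    P = P + D * dt * (np.roll(P, 1) - 2 * P + np.roll(P, -1)) / dx**2

t = steps * dt
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
P0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
err = np.max(np.abs(np.fft.fft(P) - np.fft.fft(P0) * np.exp(-k**2 * D * t)))
print(err)  # small, limited by the O(dx^2, dt) discretization error
```

The agreement is limited only by the finite-difference discretization, which is consistent with every Fourier component evolving independently.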

### Propagator for the diffusion equation

We can also use this to express the explicit time evolution of the distribution

\begin{aligned}P(x, t) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, 0) e^{-k^2 D t}\exp\left( i k x \right) \\ &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \int_{-\infty}^\infty dx' P(x', 0) \exp\left( -i k x' \right)e^{-k^2 D t}\exp\left( i k x \right) \\ &= \int_{-\infty}^\infty dx' P(x', 0) \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right)\end{aligned} \hspace{\stretch{1}}(1.0.9)

Our distribution's time evolution is given by convolution with a propagator function

\begin{aligned}P(x, t) = \int dx' P(x', 0) G(x', x)\end{aligned} \hspace{\stretch{1}}(1.0.10a)

\begin{aligned}G(x', x) =\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right)\end{aligned} \hspace{\stretch{1}}(1.0.10b)

For $t \ge 0$ we can complete the square, finding that this propagator is

\begin{aligned}G(x', x) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right) \\ &= \exp\left( \left(\frac{i (x - x')}{2 \sqrt{D t}} \right)^2 \right)\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( - \left(k \sqrt{D t} - \frac{i (x - x')}{2 \sqrt{D t}} \right)^2 \right)\end{aligned} \hspace{\stretch{1}}(1.0.11)

or

\begin{aligned}\boxed{G(x', x) =\frac{1}{\sqrt{4 \pi D t}} \exp\left(-\frac{(x - x')^2}{4 D t}\right).}\end{aligned} \hspace{\stretch{1}}(1.0.12)

A schematic plot of this function as a function of $t$ for fixed $x - x'$ is plotted in (Fig1).

Fig1: Form of the propagator function for the diffusion equation
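A quick numeric sanity check of this propagator (a sketch; $D$, $t$, and the grid are arbitrary choices) confirms it is normalized and has variance $2 D t$, as expected for a Gaussian kernel of width $\sqrt{2 D t}$:

```python
import numpy as np

# Sketch: sample the propagator G on a grid in y = x - x' and check that it
# is normalized and has variance 2 D t.  D, t, and the grid are arbitrary.
D, t = 0.7, 0.3
y = np.linspace(-20, 20, 4001)       # y = x - x'
dy = y[1] - y[0]
G = np.exp(-y**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

total = np.sum(G) * dy               # ~1: probability conserving
variance = np.sum(y**2 * G) * dy     # ~2 D t = 0.42
print(total, variance)
```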

### Gaussian

For the Gaussian of 1.3 we compute the initial time Fourier transform

\begin{aligned}\tilde{P}(k) &= \int_{-\infty}^\infty dx \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(-\frac{x^2}{2 \sigma^2}-i k x \right) \\ &= \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(\left( \frac{\sqrt{ 2 \sigma^2}}{2} k i\right)^2\right)\int_{-\infty}^\infty dx \exp\left(-\left( \frac{x}{\sqrt{2 \sigma^2} } + \frac{\sqrt{ 2 \sigma^2}}{2} k i\right)^2\right) \\ &= \exp\left(-\frac{\sigma^2 k^2}{2}\right).\end{aligned} \hspace{\stretch{1}}(1.0.13)

The time evolution of the generating function is

\begin{aligned}\tilde{P}(k, t) = \exp\left(-\frac{\sigma^2 k^2}{2} - D k^2 t\right),\end{aligned} \hspace{\stretch{1}}(1.0.14)

and we can find our time evolved probability density by inverse transforming

\begin{aligned}P(x, t) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left(-\frac{\sigma^2 k^2}{2} - D k^2 t + i k x\right) \\ &= \exp\left(\left(i \frac{x}{ 2 \sqrt{\frac{\sigma^2}{2} + D t} }\right)^2\right)\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left(-\left(k \sqrt{\frac{\sigma^2}{2} + D t} - i \frac{x}{ 2 \sqrt{\frac{\sigma^2}{2} + D t} }\right)^2\right)\end{aligned} \hspace{\stretch{1}}(1.0.15)

For $t \ge 0$ this is

\begin{aligned}\boxed{P(x, t) =\frac{1}{{\sqrt{2 \pi \left( \sigma^2 + 2 D t \right) } }}\exp\left(-\frac{x^2}{2 \left( \sigma^2 + 2 D t \right) }\right).}\end{aligned} \hspace{\stretch{1}}(1.0.16)

As a check, we see that this reproduces the $t = 0$ value as expected. A further check, applying the propagator 1.0.12 in Mathematica, finds the same result as this manual calculation.

This is plotted for $D = \sigma = 1$ in (Fig2) for a couple different times $t \ge 0$.

Fig2: Gaussian probability density time evolution with diffusion
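The same check can be done without Mathematica. This sketch (the grid and the values of $D$, $t$, $\sigma$ are arbitrary choices) convolves the initial Gaussian against the propagator 1.0.12 numerically and compares against the boxed closed form:

```python
import numpy as np

# Sketch: check the boxed Gaussian diffusion result by convolving the initial
# profile with the propagator numerically.  D, t, sigma, and the grid are
# arbitrary choices for the check.
D, t, sigma = 1.0, 0.5, 1.0
x = np.linspace(-15, 15, 1501)
dx = x[1] - x[0]
P0 = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# G[i, j] = G(x'_i, x_j): propagator from x' to x
G = np.exp(-(x[None, :] - x[:, None])**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
P_t = np.sum(P0[:, None] * G, axis=0) * dx   # P(x, t) = int dx' P(x', 0) G(x', x)

var = sigma**2 + 2 * D * t
P_exact = np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
err = np.max(np.abs(P_t - P_exact))
print(err)  # tiny: the convolution reproduces the closed form
```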

### Boxed constant with small oscillation

The normalization of the distribution depends on the interval boundaries. With the box range given by $x \in [a, a + L]$ we have

\begin{aligned}\int_a^{a + L} dx \left( \frac{1}{{L}} + A \sin( k_0 x) \right)=1 - \frac{A}{k_0} \left( \cos( k_0(a + L) ) - \cos( k_0 a ) \right)\end{aligned} \hspace{\stretch{1}}(1.0.17)

With the symmetric box range $x \in [-L/2, L/2]$, the odd sine term integrates to zero and this is unity.

To find the distribution at a later point in time we can utilize the propagator

\begin{aligned}P(x, t) = \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \left( \frac{1}{{L}} + A \sin( k_0 x' ) \right) \exp\left( - \frac{(x' - x)^2}{4 D t }\right)\end{aligned} \hspace{\stretch{1}}(1.0.18)

Let’s write this as

\begin{aligned}P(x, t) = P_{\mathrm{rect}}(x, t) + P_{\mathrm{sin}}(x, t)\end{aligned} \hspace{\stretch{1}}(1.0.19a)

\begin{aligned}P_{\mathrm{rect}}(x, t) =\frac{1}{{2 L \sqrt{\pi D t} }} \int_{-L/2}^{L/2} dx' \exp\left( - \frac{(x' - x)^2}{4 D t }\right)\end{aligned} \hspace{\stretch{1}}(1.0.19b)

\begin{aligned}P_{\mathrm{sin}}(x, t)=\frac{A}{2 \sqrt{\pi D t} } \int_{-L/2}^{L/2} dx' \sin( k_0 x' ) \exp\left( - \frac{(x' - x)^2}{4 D t }\right)\end{aligned} \hspace{\stretch{1}}(1.0.19c)

Applying a $u = (x' - x)/\sqrt{4 D t}$ change of variables for the first term, we can reduce it to a difference of error functions

\begin{aligned}P_{\mathrm{rect}}(x, t) &= \frac{1}{{L}} \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \exp\left( - \frac{(x' - x)^2}{4 D t }\right) \\ &= \frac{1}{{L \sqrt{\pi}}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{-u^2} \\ &= \frac{1}{{2 L}} \left( \text{erf}\left( \frac{L/2 -x}{2 \sqrt{D t}} \right)-\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} \right)\right)\end{aligned} \hspace{\stretch{1}}(1.0.20)

Following Mathematica, let's introduce a two-argument error function for the difference between two points

\begin{aligned}\text{erf}(z_0, z_1) \equiv \text{erf}(z_1) - \text{erf}(z_0).\end{aligned} \hspace{\stretch{1}}(1.0.21)

Using that, our rectangular function's time evolution can be written

\begin{aligned}P_{\mathrm{rect}}(x, t)=\frac{1}{{2 L}} \text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} ,\frac{L/2 -x}{2 \sqrt{D t}} \right)\end{aligned} \hspace{\stretch{1}}(1.0.22)

For $L = D = 1$, and $t = 10^{-8}$, this is plotted in (Fig3). Somewhat surprisingly, this difference of error functions does appear to result in a rectangular function for small $t$.

Fig3: Rectangular part of the probability distribution for very small t
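These error function expressions are easy to evaluate numerically. A sketch using scipy.special.erf (the helper names erf2 and P_rect are my own; $L$, $D$, and the sample points are arbitrary choices):

```python
import numpy as np
from scipy.special import erf

def erf2(z0, z1):
    """Two-argument error function: erf(z0, z1) = erf(z1) - erf(z0)."""
    return erf(z1) - erf(z0)

def P_rect(x, t, L=1.0, D=1.0):
    """Diffused rectangular part, (1/2L) erf(-(L/2 + x)/(2 sqrt(Dt)), (L/2 - x)/(2 sqrt(Dt)))."""
    s = 2.0 * np.sqrt(D * t)
    return erf2(-(L / 2 + x) / s, (L / 2 - x) / s) / (2 * L)

print(P_rect(np.array([-0.75, 0.0, 0.75]), 1e-8))  # ~[0, 1, 0]: rectangular at small t

xs = np.linspace(-10, 10, 4001)
print(np.sum(P_rect(xs, 0.1)) * (xs[1] - xs[0]))   # ~1: diffusion conserves probability
```

This also explains the "surprising" observation above: for small $t$ both error functions saturate at $\pm 1$ inside the box and cancel outside it, reproducing the rectangle.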

The time evolution of this non-oscillation part of the probability distribution is plotted as a function of both $t$ and $x$ in (Fig4).

Fig4: Time evolution of the rectangular part of the probability distribution

For the sine piece we can also find a solution in terms of (complex) error functions

\begin{aligned}P_{\mathrm{sin}}(x, t) &= A \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \sin( k_0 x' ) \exp\left( - \frac{(x' - x)^2}{4 D t }\right) \\ &= \frac{A}{\sqrt{\pi}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du \sin( k_0 ( x + 2 u \sqrt{D t} ) ) e^{-u^2} \\ &= \frac{A}{2 i \sqrt{\pi}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du \left( e^{ i k_0 ( x + 2 u \sqrt{D t} ) } -e^{ -i k_0 ( x + 2 u \sqrt{D t} ) }\right)e^{-u^2} \\ &= \frac{A}{2 i \sqrt{\pi}}\left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -u^2 + 2 i k_0 u \sqrt{D t} } -e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -u^2 -2 i k_0 u \sqrt{D t} }\right) \\ &= \frac{A}{2 i \sqrt{\pi}}e^{ -k_0^2 D t } \left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -(u - i k_0 \sqrt{D t})^2 } -e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -(u + i k_0 \sqrt{D t})^2 }\right) \\ &= \frac{A}{2 i \sqrt{\pi}}e^{ -k_0^2 D t } \left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}}^{\frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t}} dv e^{ -v^2 }-e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}}^{\frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}} dv e^{ -v^2 }\right) \\ &= \frac{A}{4 i }e^{ -k_0^2 D t } \left( e^{ i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t} \right)-e^{ -i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}\right)\right)\end{aligned} \hspace{\stretch{1}}(1.0.23)

This is plotted for $A = D = L = 1$, $k_0 = 8 \pi$, and $t = 10^{-8}$ in (Fig5).

Fig5: Verification at t -> 0 that the diffusion result is sine like

The diffusion of this, again for $A = D = L = 1$, $k_0 = 8 \pi$, and $t \in [10^{-5}, 0.01]$ is plotted in (Fig6). Again we see that we have the expected sine for small $t$.

Fig6: Diffusion of the oscillatory term

Putting both the rectangular and the windowed sine portions of the probability distribution together, we have the diffusion result for the entire distribution

\begin{aligned}\boxed{\begin{aligned}P(x, t)&=\frac{1}{{2 L}} \text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} ,\frac{L/2 -x}{2 \sqrt{D t}} \right) \\ &+\frac{A}{4 i }e^{ -k_0^2 D t + i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t} \right) \\ &-\frac{A}{4 i }e^{ -k_0^2 D t -i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}\right)\end{aligned}}\end{aligned} \hspace{\stretch{1}}(1.0.24)

It is certainly ugly looking! We see that the decay of the oscillation is governed by the $\exp( -k_0^2 D t)$ term. In the time

\begin{aligned}t = \frac{1}{{k_0^2 D}},\end{aligned} \hspace{\stretch{1}}(1.0.25)

that oscillation dies away to $1/e$ of its initial amplitude. This dispersion is plotted at times $t = 10^{-5}$ and $t = 1/(k_0^2 D)$ for $L = D = 1$, $k_0 = 8 \pi$ and $A = 1/2$ in (Fig7).

Fig7: Initial time distribution and dispersion of the oscillatory portion to 1/e of initial amplitude

Similar to the individual plots of $P_{\mathrm{rect}}(x, t)$ and $P_{\mathrm{sin}}(x, t)$ above, we plot the time evolution of the total probability distribution $P(x, t)$ in (Fig8). We see in the plots above that the rectangular portion of this distribution will also continue to flatten over time after most of the oscillation has died off.

Fig8: Diffusion of uniform but oscillating probability distribution

### An easier solution for the sinusoidal part

After working this problem and talking with classmates about how they solved it (because I was sure I'd done this windowed oscillating distribution the hard way), I now understand what was meant by “ignore boundary effects”: ignore the boundary effects in the sinusoid portion of the distribution. I didn't see how we could do that, since ignoring them makes the sine Fourier transform non-convergent. Ignoring pesky ideas like convergence, we can “approximate” the Fourier transform of the windowed sine as

\begin{aligned}\tilde{P}_{\mathrm{sin}}(k) &\approx A \int_{-\infty}^\infty \sin (k_0 x) e^{-i k x} dx \\ &= \frac{A }{2 i} \int_{-\infty}^\infty \left(e^{i (k_0 - k) x} -e^{-i (k_0 + k) x} \right)dx \\ &= \frac{A \pi}{i} \left( \delta(k - k_0) - \delta(k + k_0)\right)\end{aligned} \hspace{\stretch{1}}(1.0.26)

Now we can inverse Fourier transform the diffusion result with ease since we’ve got delta functions. That is

\begin{aligned}P_{\mathrm{sin}}(x, t) &\approx \frac{1}{{2 \pi}} \frac{A \pi}{i} \int\left( \delta(k - k_0) - \delta(k + k_0)\right)e^{-D k^2 t} e^{i k x} dk \\ &= A e^{-D k_0^2 t} \frac{ e^{i k_0 x} - e^{-i k_0 x}}{2 i} \\ &= A e^{-D k_0^2 t} \sin( k_0 x )\end{aligned} \hspace{\stretch{1}}(1.0.27)
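The $1/(k_0^2 D)$ decay time can also be cross-checked numerically. A sketch with a periodic finite-difference stepper (the parameters roughly match the plots above, but are otherwise arbitrary choices):

```python
import numpy as np

# Sketch: evolve P(x,0) = 1/L + A sin(k0 x) on a periodic box with an explicit
# finite-difference stepper, and check that at t = 1/(k0^2 D) the modulation
# has decayed to ~1/e of its initial amplitude.  Parameters are arbitrary.
L, D, A = 1.0, 1.0, 0.5
k0 = 8 * np.pi          # a whole number of periods fits in the box
N = 512
dx = L / N
x = np.arange(N) * dx
P = 1 / L + A * np.sin(k0 * x)

t_decay = 1 / (k0**2 * D)
dt = 0.2 * dx**2 / D
steps = int(round(t_decay / dt))
for _ in range(steps):
    P = P + D * dt * (np.roll(P, 1) - 2 * P + np.roll(P, -1)) / dx**2

amplitude = (P.max() - P.min()) / 2
print(amplitude, A / np.e)   # close to each other
```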

## Question: Generating function

The Fourier transform of the probability distribution defined above, $\tilde{P}(k)$, is called the “generating function” of the distribution. Show that the $n$-th derivative of this generating function ${\partial^n \tilde{P}(k)}/{\partial k^n}$ at the origin $k = 0$ is related to the $n$-th moment of the distribution function defined via $\left\langle{{x^n}}\right\rangle = \int dx P(x) x^n$. We will later see that the “partition function” in statistical mechanics is closely related to this concept of a generating function, and derivatives of this partition function can be related to thermodynamic averages of various observables.

\begin{aligned}{\left.{{ \frac{\partial^n}{\partial k^n} \tilde{P}(k) }}\right\vert}_{{k = 0}} &= {\left.{{ \frac{\partial^n}{\partial k^n} \left( \int_{-\infty}^\infty dx P(x) \exp\left( -i k x \right)\right)}}\right\vert}_{{k = 0}} \\ &= {\left.{{ \left(\int_{-\infty}^\infty dx P(x) (-i x)^n\exp\left( -i k x \right)\right)}}\right\vert}_{{k = 0}} \\ &= (-i)^n \int_{-\infty}^\infty dx P(x) x^n \\ &= (-i)^n \left\langle{{x^n}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.0.28)
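A sketch verifying this moment relation symbolically, for the Gaussian generating function computed earlier (using sympy rather than the Mathematica used elsewhere; a zero-mean Gaussian has $\left\langle x^2 \right\rangle = \sigma^2$ and $\left\langle x^4 \right\rangle = 3\sigma^4$):

```python
import sympy as sp

# Sketch: for the Gaussian generating function P~(k) = exp(-sigma^2 k^2 / 2),
# check that <x^n> = (d^n/dk^n P~)|_{k=0} / (-i)^n gives the known Gaussian
# moments <x^2> = sigma^2 and <x^4> = 3 sigma^4.
k = sp.symbols('k', real=True)
sigma = sp.symbols('sigma', positive=True)
P_tilde = sp.exp(-sigma**2 * k**2 / 2)

m2 = sp.diff(P_tilde, k, 2).subs(k, 0) / (-sp.I) ** 2   # <x^2>
m4 = sp.diff(P_tilde, k, 4).subs(k, 0) / (-sp.I) ** 4   # <x^4>
print(sp.simplify(m2), sp.simplify(m4))  # sigma**2, 3*sigma**4
```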

## My submission for PHY356 (Quantum Mechanics I) Problem Set II.

Posted by peeterjoot on November 16, 2010

# Problem 1.

A particle of mass $m$ is free to move along the x-direction such that $V(X)=0$. Express the time evolution operator $U(t,t_0)$ defined by Eq. (2.166) using the momentum eigenstates ${\lvert {p} \rangle}$ with delta-function normalization. Find ${\langle {x} \rvert} U(t,t_0) {\lvert {x'} \rangle}$, where ${\lvert {x} \rangle}$ and ${\lvert {x'} \rangle}$ are position eigenstates. What is the physical meaning of this expression?

## Momentum matrix element.

We can expand the time evolution operator in series

\begin{aligned}U(t,t_0) &= e^{-i H(t-t_0)/\hbar} \\ &= e^{ -i P^2 (t-t_0)/ 2m \hbar } \\ &= 1 + \sum_{k=1}^\infty \frac{1}{{k!}} \left( -i \frac{P^2 (t-t_0)}{2m \hbar} \right)^k.\end{aligned}

We can now evaluate the momentum matrix element ${\langle {p} \rvert} U(t,t_0) {\lvert {p'} \rangle}$, which will essentially require the value of ${\langle {p} \rvert} P^{2k} {\lvert {p'} \rangle}$. That is

\begin{aligned}{\langle {p} \rvert} P^{2k} {\lvert {p'} \rangle}&= {\langle {p} \rvert} P^{2k-1} P {\lvert {p'} \rangle} \\ &= {\langle {p} \rvert} P^{2k-1} {\lvert {p'} \rangle} p' \\ &= \cdots \\ &= \left\langle{{p}} \vert {{p'}}\right\rangle (p')^{2k}.\end{aligned}

The momentum matrix element is therefore reduced to

\begin{aligned}{\langle {p} \rvert} U(t,t_0) {\lvert {p'} \rangle}&=\left\langle{{p}} \vert {{p'}}\right\rangle \exp\left( -i \frac{p^2 (t-t_0)}{2m \hbar} \right)= \delta(p-p') \exp\left( -i \frac{p^2 (t-t_0)}{2m \hbar} \right)\end{aligned} \hspace{\stretch{1}}(1.1)

## Position matrix element.

For the position matrix element we have a similar sum

\begin{aligned}{\langle {x} \rvert} U(t,t_0) {\lvert {x'} \rangle} &= \left\langle{{x}} \vert {{x'}}\right\rangle + \sum_{k=1}^\infty \frac{1}{{k!}} {\langle {x} \rvert} \left( -i \frac{P^2 (t-t_0)}{2m \hbar} \right)^k {\lvert {x'} \rangle},\end{aligned}

and require ${\langle {x} \rvert} P^{2k} {\lvert {x'} \rangle}$ to continue. That is

\begin{aligned}{\langle {x} \rvert} P^{2k} {\lvert {x'} \rangle}&=\int dx''{\langle {x} \rvert} P^{2k-1} {\lvert {x''} \rangle}{\langle {x''} \rvert} P {\lvert {x'} \rangle} \\ &=\int dx''{\langle {x} \rvert} P^{2k-1} {\lvert {x''} \rangle} \delta(x''-x') (-i\hbar) \frac{d}{dx'} \\ &={\langle {x} \rvert} P^{2k-1} {\lvert {x'} \rangle} (-i\hbar) \frac{d}{dx'} \\ &= \cdots \\ &= \left\langle{{x}} \vert {{x'}}\right\rangle \left( (-i\hbar) \frac{d}{dx'} \right)^{2k}\end{aligned}

Our position matrix element is therefore the differential operator

\begin{aligned}{\langle {x} \rvert} U(t,t_0) {\lvert {x'} \rangle} &=\left\langle{{x}} \vert {{x'}}\right\rangle \exp\left( \frac{i (t-t_0)\hbar}{2m} \frac{d^2}{d{x'}^2} \right)=\delta(x-x') \exp\left( \frac{i (t-t_0)\hbar}{2m} \frac{d^2}{d{x'}^2} \right)\end{aligned} \hspace{\stretch{1}}(1.2)

## Physical interpretation of the position matrix element operator.

Finally, we need to determine the physical meaning of such a matrix element operator.

With the delta function that this matrix element operator includes it really only takes on a meaning with a convolution integral. The simplest such integral would be

\begin{aligned}\int dx' {\langle {x} \rvert} U {\lvert {x'} \rangle} \left\langle{{x'}} \vert {{\phi_0}}\right\rangle &={\langle {x} \rvert} U {\lvert {\phi_0} \rangle} \\ &=\left\langle{{x}} \vert {{\phi(t)}}\right\rangle \\ &=\phi(x,t),\end{aligned}

or

\begin{aligned}\phi(x,t) = \int dx' {\langle {x} \rvert} U {\lvert {x'} \rangle} \phi(x',0)\end{aligned}

The LHS has a physical meaning: the absolute square

\begin{aligned}\int_{x_0}^{x_0+ \Delta x} {\left\lvert{\phi(x,t)}\right\rvert}^2 dx,\end{aligned} \hspace{\stretch{1}}(1.3)

provides the probability that the particle will be found in the region $[x_0, x_0+ \Delta x]$.

If we ignore the absolute square requirement and think of the (presumed normalized) wave function $\phi(x,t)$ more loosely as representing a probability directly, then we can in turn give a meaning to the matrix element ${\langle {x} \rvert} U {\lvert {x'} \rangle}$ for the time evolution operator. This provides an operator valued weighting function that provides us with the probability that a particle initially at position $x'$ will be at position $x$ at time $t$. This probability is indirect since we need to absolute square and sum over a finite interval to obtain the probability of finding the particle in that interval.

Observe that the integral on the RHS of the convolution above is a summation over all $x'$, so we can think of this as adding the probabilities that the particle was at each point to arrive at the total probability for finding it at the new location $x$. The time evolution operator matrix element provides the weighting in this conditional probability.

In 1.2 we found that the time evolution operator's matrix element is a differential operator in the position representation. In the general case this means that this probability weighting is not just numeric, since the operation of the matrix element on the initial time wave function can produce wave functions for additional states. In some special cases, we may find that this weighting is strictly numeric, and one such example would be the Gaussian wave packet $\phi(x',0) = e^{-a{x'}^2}$. Application of the differential operations would then produce polynomial weighted multiples of the original Gaussian. In this special case we would be able to write

\begin{aligned}\phi(x,t) = \int dx' {\langle {x} \rvert} U {\lvert {x'} \rangle} \phi(x',0) = \int dx' K(x,x',t) \phi(x',0) \end{aligned}

where $K(x,x',t)$ is a polynomial valued function (and is in fact another exponential), and now just provides a numerical weighting for the conditional probability for the particle to move from $x'$ to $x$ in time $t$. In [1], this $K(x,x',t)$ is called the Propagator function. It is perhaps justifiable to also call our similar operator valued matrix element a Propagator.
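This free-particle time evolution is easy to carry out numerically: diagonalize $U$ in the momentum basis by taking an FFT, multiply by $e^{-i \hbar k^2 t / 2m}$ (the momentum matrix element above with $p = \hbar k$), and transform back. A sketch (units $\hbar = m = 1$; the packet width and grid are arbitrary choices), which also confirms the textbook Gaussian spreading law $\sigma_t^2 = \sigma^2 + (\hbar t / 2 m \sigma)^2$:

```python
import numpy as np

# Sketch: free-particle time evolution applied numerically, by multiplying the
# momentum-space wave function by exp(-i hbar k^2 t / (2 m)) and transforming
# back.  Units hbar = m = 1; the packet width and grid are arbitrary choices.
hbar = m = 1.0
sigma, t = 1.0, 2.0
N, Lbox = 2048, 80.0
dx = Lbox / N
x = (np.arange(N) - N // 2) * dx
psi0 = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

prob = np.abs(psi_t)**2
norm = np.sum(prob) * dx        # unitary evolution: stays ~1
var = np.sum(x**2 * prob) * dx  # ~sigma^2 + (hbar t / (2 m sigma))^2 = 2
print(norm, var)
```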

I got full marks on this assignment. There’s apparently another way to do part of the first question on the position representation, and I was instructed by the TA to see the posted solution, which is not yet available.

# References

[1] R. Liboff. Introductory quantum mechanics. Cambridge: Addison-Wesley Press, Inc, 2003.

## Fourier transformation of the Pauli QED wave equation (Take I).

Posted by peeterjoot on May 29, 2010

# Motivation.

In [1], Feynman writes the Pauli wave equation for a non-relativistic treatment of a mass in a scalar and vector potential electrodynamic field. That is

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)^2 \Psi + e \phi \Psi\end{aligned} \hspace{\stretch{1}}(1.1)

Is this amenable to Fourier transform solution like so many other PDEs? Let’s give it a try. It would also be interesting to attempt to apply such a computation to see if it is possible to calculate $\left\langle{\mathbf{x}}\right\rangle$, and the first two derivatives of this expectation value. I would guess that this would produce the Lorentz force equation.

# Prep

## Fourier Notation.

Our transform pair will be written

\begin{aligned} \Psi(\mathbf{x}, t) &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}\hat{\Psi}(\mathbf{k}, t) &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \Psi(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{x} \end{aligned} \hspace{\stretch{1}}(2.3)

## Interpretation of the squared momentum operator.

Feynman actually wrote

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left[\sigma \cdot \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)\right]\left[\sigma \cdot \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)\right] \Psi + e \phi \Psi\end{aligned} \hspace{\stretch{1}}(2.4)

That $\sigma \cdot$ notation I'm not familiar with, and I've written this as a plain old vector square. If $\mathbf{p}$ were not an operator, then this would be a scalar, but as written this actually also includes a bivector term proportional to $\boldsymbol{\nabla} \wedge \mathbf{A} = I \mathbf{B}$. To see that, let's expand this operator explicitly.

\begin{aligned}\left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right) \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right) \Psi &=\left( \mathbf{p}^2 - \frac{e}{c} ( \mathbf{p} \mathbf{A} + \mathbf{A} \mathbf{p} ) + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi \\ &=\left( - \hbar^2 \boldsymbol{\nabla}^2 + \frac{i e \hbar }{c} ( \boldsymbol{\nabla} \mathbf{A} + \mathbf{A} \boldsymbol{\nabla} ) + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi \\ \end{aligned}

This anticommutator of the vector potential and the gradient is only a scalar if $\mathbf{A}$ has zero divergence. More generally, expanding by chain rules, and using braces to indicate the scope of the differential operations, we have

\begin{aligned}( \boldsymbol{\nabla} \mathbf{A} + \mathbf{A} \boldsymbol{\nabla} ) \Psi&=(\boldsymbol{\nabla} \Psi) \mathbf{A} + \mathbf{A} (\boldsymbol{\nabla} \Psi) + (\boldsymbol{\nabla} \mathbf{A}) \Psi \\ &=2 \mathbf{A} \cdot (\boldsymbol{\nabla} \Psi) + (\boldsymbol{\nabla} \cdot \mathbf{A}) \Psi + I (\boldsymbol{\nabla} \times \mathbf{A}) \Psi \\ &=2 \mathbf{A} \cdot (\boldsymbol{\nabla} \Psi) + (\boldsymbol{\nabla} \cdot \mathbf{A}) \Psi + I \mathbf{B} \Psi - I \mathbf{A} \times (\boldsymbol{\nabla} \Psi) \\ \end{aligned}

where $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$ is the spatial unit trivector, and $\mathbf{B} = \boldsymbol{\nabla} \times \mathbf{A}$.

This is assuming $\Psi$ should be treated as a complex valued scalar, and not a complex-like geometric object of any sort. Does this bivector term have physical meaning? Should it be discarded or retained? If we assume discarded, then we really want to write the Pauli equation utilizing an explicit scalar selection, as in

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left\langle{{ \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)^2 }}\right\rangle \Psi + e \phi \Psi.\end{aligned} \hspace{\stretch{1}}(2.5)

Assuming that to be the case, our squared momentum operator takes the form

\begin{aligned}\left\langle{{ \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)^2 }}\right\rangle \Psi &=\left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 \frac{i e \hbar }{c}\mathbf{A} \cdot \boldsymbol{\nabla} + \frac{i e \hbar }{c}(\boldsymbol{\nabla} \cdot \mathbf{A}) + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi.\end{aligned} \hspace{\stretch{1}}(2.6)

The Pauli equation, written out explicitly in terms of the gradient is then

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} &= \frac{1}{{2m}} \left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 \frac{i e \hbar }{c}\mathbf{A} \cdot \boldsymbol{\nabla} + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi+e \left( \frac{i \hbar }{ 2 m c} (\boldsymbol{\nabla} \cdot \mathbf{A}) + \phi \right) \Psi.\end{aligned} \hspace{\stretch{1}}(2.7)
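A one dimensional symbolic check of this scalar expansion is straightforward (a sketch with sympy, which I'm substituting here for a by-hand verification; with $p = -i \hbar \, d/dx$ the 1D analogue of the expansion above should hold exactly):

```python
import sympy as sp

# Sketch: 1D symbolic check of the scalar expansion of the squared momentum
# operator.  With p = -i hbar d/dx, expanding (p - (e/c) A)^2 Psi should give
# -hbar^2 Psi'' + 2 (i e hbar/c) A Psi' + (i e hbar/c) A' Psi + (e/c)^2 A^2 Psi.
x = sp.symbols('x', real=True)
hbar, e, c = sp.symbols('hbar e c', positive=True)
A = sp.Function('A')(x)
Psi = sp.Function('Psi')(x)

def p(f):
    """Momentum operator p = -i hbar d/dx."""
    return -sp.I * hbar * sp.diff(f, x)

once = p(Psi) - e / c * A * Psi        # (p - (e/c) A) Psi
lhs = p(once) - e / c * A * once       # (p - (e/c) A)^2 Psi
rhs = (-hbar**2 * sp.diff(Psi, x, 2)
       + 2 * sp.I * e * hbar / c * A * sp.diff(Psi, x)
       + sp.I * e * hbar / c * sp.diff(A, x) * Psi
       + (e / c) ** 2 * A**2 * Psi)
print(sp.simplify(sp.expand(lhs - rhs)))  # 0
```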

### Confirmation.

Instead of guessing what Feynman means when he writes Pauli’s equation, it would be better to just check what Pauli says. In [2] he uses the more straightforward notation

\begin{aligned}\frac{1}{{2m}} \sum_{k=1}^3 \left( p_k - \frac{e}{c}A_k \right)^2\end{aligned} \hspace{\stretch{1}}(2.8)

for the vector potential dependent part of the Hamiltonian operator. This is just the scalar part as was guessed.

# Guts

Using the expansion 2.7 of the Pauli equation, and writing $V = \phi + i \hbar (\boldsymbol{\nabla} \cdot \mathbf{A})/ (2 m c)$ for the effective complex potential we have

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 i \hbar \frac{e}{c} \mathbf{A} \cdot \boldsymbol{\nabla} + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi + e V \Psi.\end{aligned} \hspace{\stretch{1}}(3.9)

Let’s now apply each of these derivative operations to our assumed Fourier solution $\Psi(\mathbf{x}, t)$ from 2.2. Starting with the Laplacian we have

\begin{aligned}\boldsymbol{\nabla}^2 \Psi(\mathbf{x}, t) &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, t) (i\mathbf{k})^2 e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.10)

For the $\mathbf{A} \cdot \boldsymbol{\nabla}$ operator application we have

\begin{aligned}\mathbf{A} \cdot \boldsymbol{\nabla} \Psi(\mathbf{x}, t) &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, t) (i \mathbf{A} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.11)

Putting both together we have

\begin{aligned}0 &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \left(-i \hbar \frac{\partial {\hat{\Psi}}}{\partial {t}} + \frac{1}{{2m}} \left( \hbar^2 \mathbf{k}^2 - 2 \hbar \frac{e}{c} \mathbf{A} \cdot \mathbf{k} + \frac{e^2}{c^2} \mathbf{A}^2 \right) \hat{\Psi} + e V \hat{\Psi} \right)e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.12)

We can tidy this up slightly by completing the square, yielding

\begin{aligned}0 &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \left(-i \hbar \frac{\partial {\hat{\Psi}}}{\partial {t}} + \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}, t) \right)^2 + e V(\mathbf{x}, t) \right) \hat{\Psi} \right)e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.13)

If this is to be zero for all $(\mathbf{x}, t)$, it seems clear that we need $\hat{\Psi}(\mathbf{k}, t)$ to be the solution of the first order differential equation

\begin{aligned}\frac{\partial {\hat{\Psi}}}{\partial {t}}(\mathbf{k}, t) = \frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}, t) \right)^2 + e V(\mathbf{x}, t) \right) \hat{\Psi}(\mathbf{k}, t)\end{aligned} \hspace{\stretch{1}}(3.14)

Somewhere along the way this got a bit confused. Our Fourier transform function is somehow a function of not just wave number, but position, since $\hat{\Psi} = \hat{\Psi}(\mathbf{x}, \mathbf{k}, t)$ by virtue of being a solution to a differential equation involving $\mathbf{A}(\mathbf{x},t)$, and $V(\mathbf{x}, t)$? Can we pretend not to have noticed this and continue on anyways? Let's try the further simplification of the system by imposing a constraint of constant time potentials (${\partial {\mathbf{A}}}/{\partial {t}} = {\partial {V}}/{\partial {t}} = 0$). That allows for direct integration of the wave function's Fourier transform

\begin{aligned}\hat{\Psi}(\mathbf{k}, t) = \hat{\Psi}(\mathbf{k}, 0) \exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A} \right)^2 + e V \right) t\right).\end{aligned} \hspace{\stretch{1}}(3.15)

And inverse transforming this

\begin{aligned}\Psi(\mathbf{x}, t) &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, 0) \exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) t+ i \mathbf{k} \cdot \mathbf{x}\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.16)

By inserting the inverse Fourier transform of $\hat{\Psi}(\mathbf{k}, 0)$, we have the time evolution of the wave function as a convolution integral

\begin{aligned}\Psi(\mathbf{x}, t) &= \frac{1}{{(2 \pi)^3}} \int \Psi(\mathbf{x}', 0) \exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) t+ i \mathbf{k} \cdot (\mathbf{x} - \mathbf{x}')\right)d^3 \mathbf{k} d^3 \mathbf{x}'.\end{aligned} \hspace{\stretch{1}}(3.17)

Splitting out the convolution kernel, this takes a slightly tidier form

\begin{aligned}\Psi(\mathbf{x}, t) &= \int \hat{U}(\mathbf{x}, \mathbf{x}', t) \Psi(\mathbf{x}', 0) d^3 \mathbf{x}' \\ \hat{U}(\mathbf{x}, \mathbf{x}', t) &= \frac{1}{{(2 \pi)^3}} \int\exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) t+ i \mathbf{k} \cdot (\mathbf{x} - \mathbf{x}')\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.18)

## Verification attempt.

If we apply the Pauli equation 1.1 to 3.18 does it produce the correct answer?

For the LHS we have

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} &=\int \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) \hat{U}(\mathbf{x}, \mathbf{x}', t) \Psi(\mathbf{x}', 0) d^3 \mathbf{x}',\end{aligned} \hspace{\stretch{1}}(3.20)

but for the RHS we have

\begin{aligned}&\left( \frac{1}{{2m}} \left\langle{{(\mathbf{p} - \frac{e}{c}\mathbf{A})^2}}\right\rangle + e \phi \right) \Psi=\int d^3 \mathbf{x}'\hat{U}(\mathbf{x}, \mathbf{x}', t) \Psi(\mathbf{x}', 0) \\ &\qquad\left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) +\frac{t}{2m i \hbar} \left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 \frac{i e \hbar }{c} \mathbf{A} \cdot \boldsymbol{\nabla} \right)\left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x})\right)\right) \end{aligned} \hspace{\stretch{1}}(3.21)

So if it were not for the spatial dependence of $\mathbf{A}$ and $\phi$, we would have LHS equal to the RHS. It appears that ignoring the odd $\mathbf{x}$ dependence in the $\hat{\Psi}$ differential equation definitely leads to trouble, and only works for constant potential distributions, a rather boring special case.