Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.

Posts Tagged ‘fourier transform’

Exponential solutions to second order linear system

Posted by peeterjoot on October 20, 2013

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Motivation

We’re discussing specific solution forms for systems of coupled linear differential equations, such as a loop of “spring” connected masses (i.e. atoms interacting with harmonic oscillator potentials) as sketched in fig. 1.1.

Fig 1.1: Three springs loop

 

Instead of assuming a solution, let’s see how far we can get attacking this problem systematically.

Matrix methods

Suppose that we have a set of N masses constrained to a circle interacting with harmonic potentials. The Lagrangian for such a system (using modulo N indexing) is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} \sum_{k = 0}^{N-1} m_k \dot{u}_k^2 - \frac{1}{{2}} \sum_{k = 0}^{N-1} \kappa_k \left( u_{k+1} - u_k \right)^2.\end{aligned} \hspace{\stretch{1}}(1.1)

The force equations follow directly from the Euler-Lagrange equations

\begin{aligned}0 = \frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{u}_{n}}}- \frac{\partial {\mathcal{L}}}{\partial {u_{n}}}.\end{aligned} \hspace{\stretch{1}}(1.2)

For the simple three particle system depicted above, this is

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m_0 \dot{u}_0^2 +\frac{1}{{2}} m_1 \dot{u}_1^2 +\frac{1}{{2}} m_2 \dot{u}_2^2 - \frac{1}{{2}} \kappa_0 \left( u_1 - u_0 \right)^2- \frac{1}{{2}} \kappa_1 \left( u_2 - u_1 \right)^2- \frac{1}{{2}} \kappa_2 \left( u_0 - u_2 \right)^2,\end{aligned} \hspace{\stretch{1}}(1.3)

with equations of motion

\begin{aligned}\begin{aligned}0 &= m_0 \ddot{u}_0 + \kappa_0 \left( u_0 - u_1 \right) + \kappa_2 \left( u_0 - u_2 \right) \\ 0 &= m_1 \ddot{u}_1 + \kappa_1 \left( u_1 - u_2 \right) + \kappa_0 \left( u_1 - u_0 \right) \\ 0 &= m_2 \ddot{u}_2 + \kappa_2 \left( u_2 - u_0 \right) + \kappa_1 \left( u_2 - u_1 \right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.4)

Let’s partially non-dimensionalize this. First introduce an average mass \bar{{m}} and an average spring constant \bar{{k}}, and rearrange slightly

\begin{aligned}\begin{aligned}\frac{\bar{{m}}}{\bar{{k}}} \ddot{u}_0 &= -\frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_0} \left( u_0 - u_1 \right) - \frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_0} \left( u_0 - u_2 \right) \\ \frac{\bar{{m}}}{\bar{{k}}} \ddot{u}_1 &= -\frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_1} \left( u_1 - u_2 \right) - \frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_1} \left( u_1 - u_0 \right) \\ \frac{\bar{{m}}}{\bar{{k}}} \ddot{u}_2 &= -\frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_2} \left( u_2 - u_0 \right) - \frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_2} \left( u_2 - u_1 \right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.5)

With

\begin{aligned}\tau = \sqrt{\frac{\bar{{k}}}{\bar{{m}}}} t = \Omega t\end{aligned} \hspace{\stretch{1}}(1.0.6.6)

\begin{aligned}\mathbf{u} = \begin{bmatrix}u_0 \\ u_1 \\ u_2\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(1.0.6.6)

\begin{aligned}B = \begin{bmatrix}-\frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_0} - \frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_0} &\frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_0} &\frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_0} \\ \frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_1} &-\frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_1} - \frac{\kappa_0 \bar{{m}}}{\bar{{k}} m_1} &\frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_1} \\ \frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_2} & \frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_2} &-\frac{\kappa_2 \bar{{m}}}{\bar{{k}} m_2} - \frac{\kappa_1 \bar{{m}}}{\bar{{k}} m_2} \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.6.6)

Our system takes the form

\begin{aligned}\frac{d^2 \mathbf{u}}{d\tau^2} = B \mathbf{u}.\end{aligned} \hspace{\stretch{1}}(1.0.6.6)
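As an aside, this scaled interaction matrix is easy to construct programmatically for any number of masses on the loop. Here's a short numpy sketch of mine that does so (the helper name interaction_matrix is made up):

import numpy as np

def interaction_matrix(masses, springs):
    """Scaled interaction matrix B for N masses on a loop, with modulo-N
    indexing: springs[k] couples masses k and k+1 (mod N)."""
    m = np.asarray(masses, dtype=float)
    kap = np.asarray(springs, dtype=float)
    N = len(m)
    mbar, kbar = m.mean(), kap.mean()    # average mass and spring constant
    B = np.zeros((N, N))
    for i in range(N):
        B[i, i] = -(kap[i] + kap[i - 1]) * mbar / (kbar * m[i])
        B[i, (i + 1) % N] = kap[i] * mbar / (kbar * m[i])
        B[i, (i - 1) % N] = kap[i - 1] * mbar / (kbar * m[i])   # kap[-1] wraps around
    return B

# equal masses and springs: every diagonal entry -2, neighbors 1
print(interaction_matrix([1, 1, 1], [1, 1, 1]))

With equal masses and spring constants this reproduces the interaction matrix of the example worked below.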

We can at least theoretically solve this in a simple fashion if we first convert it to a first order system. We can do that by augmenting our vector of displacements with their first derivatives

\begin{aligned}\mathbf{w} =\begin{bmatrix}\mathbf{u} \\ \frac{d \mathbf{u}}{d\tau} \end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.6.6)

So that

\begin{aligned}\frac{d \mathbf{w}}{d\tau} =\begin{bmatrix}0 & I \\ B & 0\end{bmatrix} \mathbf{w}= A \mathbf{w}.\end{aligned} \hspace{\stretch{1}}(1.0.9)

Now the solution is conceptually trivial

\begin{aligned}\mathbf{w} = e^{A \tau} \mathbf{w}_\circ.\end{aligned} \hspace{\stretch{1}}(1.0.9)

We are, however, faced with the task of exponentiating the matrix A. All the powers of A will be required, but they turn out to be easy to calculate

\begin{aligned}{\begin{bmatrix}0 & I \\ B & 0\end{bmatrix} }^2=\begin{bmatrix}0 & I \\ B & 0\end{bmatrix} \begin{bmatrix}0 & I \\ B & 0\end{bmatrix} =\begin{bmatrix}B & 0 \\ 0 & B\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(1.0.11a)

\begin{aligned}{\begin{bmatrix}0 & I \\ B & 0\end{bmatrix} }^3=\begin{bmatrix}B & 0 \\ 0 & B\end{bmatrix} \begin{bmatrix}0 & I \\ B & 0\end{bmatrix} =\begin{bmatrix}0 & B \\ B^2 & 0\end{bmatrix} \end{aligned} \hspace{\stretch{1}}(1.0.11b)

\begin{aligned}{\begin{bmatrix}0 & I \\ B & 0\end{bmatrix} }^4=\begin{bmatrix}0 & B \\ B^2 & 0\end{bmatrix} \begin{bmatrix}0 & I \\ B & 0\end{bmatrix} =\begin{bmatrix}B^2 & 0 \\ 0 & B^2\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.11c)

allowing us to write out the matrix exponential

\begin{aligned}e^{A \tau} = \sum_{k = 0}^\infty \frac{\tau^{2k}}{(2k)!} \begin{bmatrix}B^k & 0 \\ 0 & B^k\end{bmatrix}+\sum_{k = 0}^\infty \frac{\tau^{2k + 1}}{(2k + 1)!} \begin{bmatrix}0 & B^k \\ B^{k+1} & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.11c)

Case I: No zero eigenvalues

Provided that B has no zero eigenvalues, we could factor this as

\begin{aligned}\begin{bmatrix}0 & B^k \\ B^{k+1} & 0\end{bmatrix}=\begin{bmatrix}0 & B^{-1/2} \\ B^{1/2} & 0\end{bmatrix}\begin{bmatrix}B^{k + 1/2} & 0 \\ 0 & B^{k + 1/2}\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.11c)

This suggests the following result. We’ll find, however, that the three springs interaction matrix B does have a zero eigenvalue, so we’ll have to be more careful. For an interaction matrix without such a zero we could simply write

\begin{aligned}e^{A \tau} = \sum_{k = 0}^\infty \frac{\tau^{2k}}{(2k)!} \begin{bmatrix}\sqrt{B}^{2k} & 0 \\ 0 & \sqrt{B}^{2k}\end{bmatrix}+\begin{bmatrix}0 & B^{-1/2} \\ B^{1/2} & 0\end{bmatrix}\sum_{k = 0}^\infty \frac{\tau^{2k + 1}}{(2k + 1)!} \begin{bmatrix}\sqrt{B}^{2 k + 1} & 0 \\ 0 & \sqrt{B}^{2 k+1} \end{bmatrix}=\cosh\begin{bmatrix}\sqrt{B} \tau & 0 \\ 0 & \sqrt{B} \tau\end{bmatrix}+ \begin{bmatrix}0 & 1/\sqrt{B} \\ \sqrt{B} & 0\end{bmatrix}\sinh\begin{bmatrix}\sqrt{B} \tau & 0 \\ 0 & \sqrt{B} \tau\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.11c)

This is

\begin{aligned}e^{A \tau}=\begin{bmatrix}\cosh \sqrt{B} \tau & (1/\sqrt{B}) \sinh \sqrt{B} \tau \\ \sqrt{B} \sinh \sqrt{B} \tau & \cosh \sqrt{B} \tau \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.11c)

The solution, written out is

\begin{aligned}\begin{bmatrix}\mathbf{u} \\ \mathbf{u}'\end{bmatrix}=\begin{bmatrix}\cosh \sqrt{B} \tau & (1/\sqrt{B}) \sinh \sqrt{B} \tau \\ \sqrt{B} \sinh \sqrt{B} \tau & \cosh \sqrt{B} \tau \end{bmatrix}\begin{bmatrix}\mathbf{u}_\circ \\ \mathbf{u}_\circ'\end{bmatrix},\end{aligned} \hspace{\stretch{1}}(1.0.11c)

so that

\begin{aligned}\boxed{\mathbf{u} = \cosh \sqrt{B} \tau \mathbf{u}_\circ + \frac{1}{{\sqrt{B}}} \sinh \sqrt{B} \tau \mathbf{u}_\circ'.}\end{aligned} \hspace{\stretch{1}}(1.0.11c)

As a check, differentiating twice shows that this is in fact the general solution, since we have

\begin{aligned}\mathbf{u}' = \sqrt{B} \sinh \sqrt{B} \tau \mathbf{u}_\circ + \cosh \sqrt{B} \tau \mathbf{u}_\circ',\end{aligned} \hspace{\stretch{1}}(1.0.11c)

and

\begin{aligned}\mathbf{u}'' = B \cosh \sqrt{B} \tau \mathbf{u}_\circ + \sqrt{B} \sinh \sqrt{B} \tau \mathbf{u}_\circ'= B \left( \cosh \sqrt{B} \tau \mathbf{u}_\circ + \frac{1}{{\sqrt{B}}} \sinh \sqrt{B} \tau \mathbf{u}_\circ' \right)= B \mathbf{u}.\end{aligned} \hspace{\stretch{1}}(1.0.11c)

Observe that this solution is a general solution to second order constant coefficient linear systems of the form we have in eq. 1.5. However, to make it meaningful we do have the additional computational task of performing an eigensystem decomposition of the matrix B. We expect negative eigenvalues that will give us oscillatory solutions (i.e. the matrix square roots will have imaginary eigenvalues).
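Before tackling the zero eigenvalue wrinkle below, here's a quick scipy verification of this boxed solution, using a made-up two by two interaction matrix that has no zero eigenvalues (this check is mine, not part of the original derivation):

import numpy as np
from scipy.linalg import expm, coshm, sinhm, sqrtm

# a made-up symmetric interaction matrix with no zero eigenvalues
B = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
Z = np.zeros((2, 2))
A = np.block([[Z, np.eye(2)], [B, Z]])

tau = 0.7
S = sqrtm(B.astype(complex))    # sqrt(B) is complex since B is negative definite

# block cosh/sinh form of the matrix exponential
blocks = np.block([[coshm(S * tau), np.linalg.inv(S) @ sinhm(S * tau)],
                   [S @ sinhm(S * tau), coshm(S * tau)]])

print(np.allclose(expm(A * tau), blocks))   # -> True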

Example: An example diagonalization to try things out


Let’s do that diagonalization for the simplest of the three springs system as an example, with \kappa_j = \bar{{k}} and m_j = \bar{{m}}, so that we have

\begin{aligned}B = \begin{bmatrix}-2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.20)

An orthonormal eigensystem for B is

\begin{aligned}\left\{\mathbf{e}_{-3, 1}, \mathbf{e}_{-3, 2}, \mathbf{e}_{0, 1} \right\}=\left\{\frac{1}{{\sqrt{6}}}\begin{bmatrix} -1 \\ -1 \\ 2 \end{bmatrix},\frac{1}{{\sqrt{2}}}\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix},\frac{1}{{\sqrt{3}}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\right\}.\end{aligned} \hspace{\stretch{1}}(1.21)

With

\begin{aligned}\begin{aligned}U &= \frac{1}{{\sqrt{6}}}\begin{bmatrix} -\sqrt{3} & -1 & \sqrt{2} \\ 0 & 2 & \sqrt{2} \\ \sqrt{3} & -1 & \sqrt{2} \end{bmatrix} \\ D &= \begin{bmatrix}-3 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix}\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.22)

We have

\begin{aligned}B = U D U^\text{T},\end{aligned} \hspace{\stretch{1}}(1.23)

We also find that B and its root are intimately related in a surprising way

\begin{aligned}\sqrt{B} = \sqrt{3} iU \begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix}U^\text{T}=\frac{1}{{\sqrt{3} i}} B.\end{aligned} \hspace{\stretch{1}}(1.24)

We also see, unfortunately, that B has a zero eigenvalue, so we can’t compute 1/\sqrt{B}. We’ll have to back up and start again differently.
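A quick numpy cross check of this eigensystem, and of that inconvenient zero eigenvalue (my own check):

import numpy as np

B = np.array([[-2.0, 1.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, 1.0, -2.0]])

vals, U = np.linalg.eigh(B)     # ascending eigenvalues of the symmetric B
print(vals)                     # -> [-3., -3., 0.]
print(np.allclose(U @ np.diag(vals) @ U.T, B))   # -> True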

Case II: allowing for zero eigenvalues

Now that we realize we have to deal with zero eigenvalues, a different approach suggests itself. Instead of reducing our system using a Hamiltonian transformation to a first order system, let’s utilize that diagonalization directly. Our system is

\begin{aligned}\mathbf{u}'' = B \mathbf{u} = U D U^{-1} \mathbf{u},\end{aligned} \hspace{\stretch{1}}(1.25)

where D = [ \lambda_i \delta_{ij} ] and

\begin{aligned}\left( U^{-1} \mathbf{u} \right)'' = D \left( U^{-1} \mathbf{u} \right).\end{aligned} \hspace{\stretch{1}}(1.26)

Let

\begin{aligned}\mathbf{z} = U^{-1} \mathbf{u},\end{aligned} \hspace{\stretch{1}}(1.27)

so that our system is just

\begin{aligned}\mathbf{z}'' = D \mathbf{z},\end{aligned} \hspace{\stretch{1}}(1.28)

or

\begin{aligned}z_i'' = \lambda_i z_i.\end{aligned} \hspace{\stretch{1}}(1.29)

This is N equations, each decoupled and solvable by inspection. Suppose we group the eigenvalues into sets \{ \lambda_n < 0, \lambda_p > 0, \lambda_z = 0 \}. Our solution is then

\begin{aligned}\mathbf{z} = \sum_{ \lambda_n < 0, \lambda_p > 0, \lambda_z = 0}\left( a_n \cos \sqrt{-\lambda_n} \tau + b_n \sin \sqrt{-\lambda_n} \tau \right)\mathbf{e}_n+\left( a_p \cosh \sqrt{\lambda_p} \tau + b_p \sinh \sqrt{\lambda_p} \tau \right)\mathbf{e}_p+\left( a_z + b_z \tau \right) \mathbf{e}_z.\end{aligned} \hspace{\stretch{1}}(1.30)

Transforming back to lattice coordinates using \mathbf{u} = U \mathbf{z}, we have

\begin{aligned}\mathbf{u} = \sum_{ \lambda_n < 0, \lambda_p > 0, \lambda_z = 0}\left( a_n \cos \sqrt{-\lambda_n} \tau + b_n \sin \sqrt{-\lambda_n} \tau \right)U \mathbf{e}_n+\left( a_p \cosh \sqrt{\lambda_p} \tau + b_p \sinh \sqrt{\lambda_p} \tau \right)U \mathbf{e}_p+\left( a_z + b_z \tau \right) U \mathbf{e}_z.\end{aligned} \hspace{\stretch{1}}(1.31)

Observe that the zero eigenvalue terms act on vectors U \mathbf{e}_z for which B U \mathbf{e}_z = \lambda_z U \mathbf{e}_z = 0. For this system such a vector represents a uniform displacement (and drift) of all the masses, contributing nothing to their relative motion, so we drop those terms in what follows.

If U = [ \mathbf{e}_i ] is a matrix of not necessarily orthonormal eigenvectors for B, then the vectors \mathbf{f}_i, where \mathbf{e}_i \cdot \mathbf{f}_j = \delta_{ij}, are the reciprocal frame vectors. These can be extracted from U^{-1} = [ \mathbf{f}_i ]^\text{T} (i.e., the rows of U^{-1}). Taking dot products of \mathbf{f}_i with \mathbf{u}(0) = \mathbf{u}_\circ and \mathbf{u}'(0) = \mathbf{u}'_\circ provides us with the unknown coefficients a_n, b_n

\begin{aligned}\mathbf{u}(\tau)= \sum_{ \lambda_n < 0, \lambda_p > 0 }\left( (\mathbf{u}_\circ \cdot \mathbf{f}_n) \cos \sqrt{-\lambda_n} \tau + \frac{\mathbf{u}_\circ' \cdot \mathbf{f}_n}{\sqrt{-\lambda_n} } \sin \sqrt{-\lambda_n} \tau \right)\mathbf{e}_n+\left( (\mathbf{u}_\circ \cdot \mathbf{f}_p) \cosh \sqrt{\lambda_p} \tau + \frac{\mathbf{u}_\circ' \cdot \mathbf{f}_p}{\sqrt{\lambda_p} } \sinh \sqrt{\lambda_p} \tau \right)\mathbf{e}_p.\end{aligned} \hspace{\stretch{1}}(1.32)

Supposing that we constrain ourselves to looking at just the oscillatory solutions (i.e. the lattice does not shake itself to pieces), then we have

\begin{aligned}\boxed{\mathbf{u}(\tau)= \sum_{ \lambda_n < 0 }\left( \sum_j \mathbf{e}_{n,j} \mathbf{f}_{n,j}^\text{T} \right)\left( \mathbf{u}_\circ \cos \sqrt{-\lambda_n} \tau + \frac{\mathbf{u}_\circ' }{\sqrt{-\lambda_n} } \sin \sqrt{-\lambda_n} \tau \right).}\end{aligned} \hspace{\stretch{1}}(1.33)

Eigenvectors for degenerate eigenvalues have been explicitly enumerated here, something previously left implied. Observe that the dot products of the form (\mathbf{a} \cdot \mathbf{f}_i) \mathbf{e}_i have been put into projector operator form to group terms more nicely. The solution can be thought of as a weighted projector operator working as a time evolution operator from the initial state.

Example: Our example interaction revisited


Recall that we had an orthonormal basis for the \lambda = -3 eigensubspace of the interaction example of eq. 1.20, so that \mathbf{e}_{-3,i} = \mathbf{f}_{-3, i}. We can sum \mathbf{e}_{-3,1} \mathbf{e}_{-3, 1}^\text{T} + \mathbf{e}_{-3,2} \mathbf{e}_{-3, 2}^\text{T} to find

\begin{aligned}\mathbf{u}(\tau)= \frac{1}{{3}}\begin{bmatrix}2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2\end{bmatrix}\left( \mathbf{u}_\circ \cos \sqrt{3} \tau + \frac{ \mathbf{u}_\circ' }{\sqrt{3} } \sin \sqrt{3} \tau \right).\end{aligned} \hspace{\stretch{1}}(1.34)

The leading matrix is an orthonormal projector of the initial conditions onto the eigen subspace for \lambda_n = -3. Observe that this is just B itself, divided by the square of the non-zero eigenvalue of \sqrt{B} (that is, this projector is -B/3). From this we can confirm by inspection that this is a solution to \mathbf{u}'' = B \mathbf{u}, as desired.
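As one more check of my own, here's a scipy sketch that integrates \mathbf{u}'' = B \mathbf{u} directly from projected (and arbitrarily chosen) initial conditions and compares against eq. 1.34:

import numpy as np
from scipy.integrate import solve_ivp

B = np.array([[-2.0, 1.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, 1.0, -2.0]])
P = np.array([[2.0, -1.0, -1.0],
              [-1.0, 2.0, -1.0],
              [-1.0, -1.0, 2.0]]) / 3.0    # projector onto the lambda = -3 subspace
w = np.sqrt(3.0)

u0 = np.array([0.3, -0.1, 0.2])            # arbitrary initial conditions
v0 = np.array([0.0, 0.5, -0.5])

def rhs(tau, y):                           # u'' = B u as a first order system
    return np.concatenate([y[3:], B @ y[:3]])

tau1 = 2.0
sol = solve_ivp(rhs, [0.0, tau1], np.concatenate([P @ u0, P @ v0]),
                rtol=1e-10, atol=1e-12)
closed = P @ (u0 * np.cos(w * tau1) + v0 * np.sin(w * tau1) / w)
print(np.allclose(sol.y[:3, -1], closed, atol=1e-6))   # -> True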

Fourier transform methods

Let’s now try another item from our usual toolbox on these sorts of second order systems, the Fourier transform. For a one variable function of time let’s write the transform pair as

\begin{aligned}x(t) = \int_{-\infty}^{\infty} \tilde{x}(\omega) e^{-i \omega t} d\omega\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

\begin{aligned}\tilde{x}(\omega) = \frac{1}{{2\pi}} \int_{-\infty}^{\infty} x(t) e^{i \omega t} dt\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

One mass harmonic oscillator

The simplest second order system is that of the harmonic oscillator

\begin{aligned}0 = \ddot{x}(t) + \omega_\circ^2 x.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Application of the transform gives

\begin{aligned}0 = \left( \frac{d^2}{dt^2} + \omega_\circ^2 \right)\int_{-\infty}^{\infty} \tilde{x}(\omega) e^{-i \omega t} d\omega= \int_{-\infty}^{\infty} \left( -\omega^2 + \omega_\circ^2 \right)\tilde{x}(\omega) e^{-i \omega t} d\omega.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

We clearly have a constraint that is a function of frequency, but one that has to hold for all time. Let’s transform this constraint to the frequency domain to consider that constraint independent of time.

\begin{aligned}0 =\frac{1}{{2 \pi}}\int_{-\infty}^{\infty} dt e^{ i \omega t}\int_{-\infty}^{\infty} \left( -{\omega'}^2 + \omega_\circ^2 \right)\tilde{x}(\omega') e^{-i \omega' t} d\omega'=\int_{-\infty}^{\infty} d\omega'\left( -{\omega'}^2 + \omega_\circ^2 \right)\tilde{x}(\omega') \frac{1}{{2 \pi}}\int_{-\infty}^{\infty} dt e^{ i (\omega -\omega') t}=\int_{-\infty}^{\infty} d\omega'\left( -{\omega'}^2 + \omega_\circ^2 \right)\tilde{x}(\omega') \delta( \omega - \omega' )=\left( -{\omega}^2 + \omega_\circ^2 \right)\tilde{x}(\omega).\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

How do we make sense of this?

Since \omega is an integration variable, we can’t just mandate that it equals the constant natural frequency \pm \omega_\circ. It’s clear that we require a constraint on the transform \tilde{x}(\omega) as well. As a trial solution, imagine that

\begin{aligned}\tilde{x}(\omega) = \left\{\begin{array}{l l}\tilde{x}_\circ & \quad \mbox{if } \left\lvert {\omega \mp \omega_\circ} \right\rvert < \omega_{\text{cutoff}} \\ 0 & \quad \mbox{otherwise}\end{array}\right.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

This gives us

\begin{aligned}0 = \tilde{x}_\circ\int_{\pm \omega_\circ -\omega_{\text{cutoff}}}^{\pm \omega_\circ +\omega_{\text{cutoff}}}\left( \omega^2 - \omega_\circ^2 \right)e^{-i \omega t} d\omega.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Now it is clear that we can satisfy our constraint only if the interval [\pm \omega_\circ -\omega_{\text{cutoff}}, \pm \omega_\circ + \omega_{\text{cutoff}}] is made infinitesimal. Specifically, we require both a \omega^2 = \omega_\circ^2 constraint and that the transform \tilde{x}(\omega) have a delta function nature. That is

\begin{aligned}\tilde{x}(\omega) = A \delta(\omega - \omega_\circ)+ B \delta(\omega + \omega_\circ).\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Substitution back into our transform gives

\begin{aligned}x(t) = A e^{-i \omega_\circ t}+ B e^{i \omega_\circ t}.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

We can verify quickly that this satisfies our harmonic equation \ddot{x} = -\omega_\circ^2 x.

Two mass harmonic oscillator

Having applied the transform technique to the very simplest second order system, we can now consider the next more complex system, that of two harmonically interacting masses (i.e. two masses coupled by a spring, with no friction).

Fig: Two masses coupled by a spring.

Our system is described by

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m_1 \dot{u}_1^2+ \frac{1}{{2}} m_2 \dot{u}_2^2-\frac{1}{{2}} \kappa \left( u_2 - u_1 \right)^2,\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

and the pair of Euler-Lagrange equations

\begin{aligned}0 = \frac{d}{dt} \frac{\partial {\mathcal{L}}}{\partial {\dot{u}_i}}- \frac{\partial {\mathcal{L}}}{\partial {u_i}}.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

The equations of motion are

\begin{aligned}\begin{aligned}0 &= m_1 \ddot{u}_1 + \kappa \left( u_1 - u_2 \right) \\ 0 &= m_2 \ddot{u}_2 + \kappa \left( u_2 - u_1 \right).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Let

\begin{aligned}\begin{aligned}u_1(t) &= \int_{-\infty}^\infty \tilde{u}_1(\omega) e^{-i \omega t} d\omega \\ u_2(t) &= \int_{-\infty}^\infty \tilde{u}_2(\omega) e^{-i \omega t} d\omega.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Insertion of these transform pairs into our equations of motion produces a pair of simultaneous integral equations to solve

\begin{aligned}\begin{aligned}0 &= \int_{-\infty}^\infty \left( \left( -m_1 \omega^2 + \kappa \right) \tilde{u}_1(\omega) - \kappa \tilde{u}_2(\omega) \right)e^{-i \omega t} d\omega \\ 0 &= \int_{-\infty}^\infty \left( \left( -m_2 \omega^2 + \kappa \right) \tilde{u}_2(\omega) - \kappa \tilde{u}_1(\omega) \right)e^{-i \omega t} d\omega.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

As with the single spring case, we can decouple these equations with an inverse transformation operation \int e^{i \omega' t}/2\pi, which gives us (after dropping primes)

\begin{aligned}0 = \begin{bmatrix}\left( -m_1 \omega^2 + \kappa \right) &- \kappa \\ - \kappa &\left( -m_2 \omega^2 + \kappa \right) \end{bmatrix}\begin{bmatrix}\tilde{u}_1(\omega) \\ \tilde{u}_2(\omega)\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Taking determinants gives us the constraint on the frequency

\begin{aligned}0 = \left( -m_1 \omega^2 + \kappa \right) \left( -m_2 \omega^2 + \kappa \right) - \kappa^2=m_1 m_2 \omega^4-\kappa (m_1 + m_2) \omega^2 =\omega^2 \left( m_1 m_2 \omega^2 -\kappa (m_1 + m_2) \right).\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Introducing a reduced mass

\begin{aligned}\frac{1}{{\mu}} = \frac{1}{m_1}+\frac{1}{m_2},\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

the pair of solutions are

\begin{aligned}\begin{aligned}\omega^2 &= 0 \\ \omega^2 &=\frac{\kappa}{\mu}\equiv \omega_\circ^2.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

As with the single mass oscillator, we require the transforms \tilde{u}_1(\omega) and \tilde{u}_2(\omega) to also be expressed as delta functions. The frequency constraint and that delta function requirement together can be expressed as

\begin{aligned}\begin{aligned}\tilde{u}_1(\omega) &= A_{+} \delta( \omega - \omega_\circ )+ A_{0} \delta( \omega )+ A_{-} \delta( \omega + \omega_\circ ) \\ \tilde{u}_2(\omega) &= B_{+} \delta( \omega - \omega_\circ )+ B_{0} \delta( \omega )+ B_{-} \delta( \omega + \omega_\circ ).\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

With a transformation back to time domain, we have functions of the form

\begin{aligned}\begin{aligned}u_1(t) &= A_{+} e^{ -i \omega_\circ t }+ A_{0} + A_{-} e^{ i \omega_\circ t } \\ u_2(t) &= B_{+} e^{ -i \omega_\circ t }+ B_{0} + B_{-} e^{ i \omega_\circ t }.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Back insertion of these into the equations of motion gives

\begin{aligned}\begin{aligned}0 &=-m_1 \omega_\circ^2\left( A_{+} e^{ -i \omega_\circ t } + A_{-} e^{ i \omega_\circ t } \right)+ \kappa\left( \left( A_{+} - B_{+} \right) e^{ -i \omega_\circ t } + \left( A_{-} - B_{-} \right) e^{ i \omega_\circ t } + A_{0} - B_{0} \right) \\ 0 &=-m_2 \omega_\circ^2\left( B_{+} e^{ -i \omega_\circ t } + B_{-} e^{ i \omega_\circ t } \right)+ \kappa\left( \left( B_{+} - A_{+} \right) e^{ -i \omega_\circ t } + \left( B_{-} - A_{-} \right) e^{ i \omega_\circ t } + B_{0} - A_{0} \right)\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Equality requires identity for all powers of e^{i \omega_\circ t}, or

\begin{aligned}\begin{aligned}0 &= B_{0} - A_{0} \\ 0 &= -m_1 \omega_\circ^2 A_{+} + \kappa \left( A_{+} - B_{+} \right) \\ 0 &= -m_1 \omega_\circ^2 A_{-} + \kappa \left( A_{-} - B_{-} \right) \\ 0 &= -m_2 \omega_\circ^2 B_{+} + \kappa \left( B_{+} - A_{+} \right) \\ 0 &= -m_2 \omega_\circ^2 B_{-} + \kappa \left( B_{-} - A_{-} \right),\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

or B_{0} = A_{0} and

\begin{aligned}0 =\begin{bmatrix}\kappa -m_1 \omega_\circ^2 & 0 & - \kappa & 0 \\ 0 & \kappa -m_1 \omega_\circ^2 & 0 & - \kappa \\ -\kappa & 0 & \kappa -m_2 \omega_\circ^2 & 0 \\ 0 & -\kappa & 0 & \kappa -m_2 \omega_\circ^2 \end{bmatrix}\begin{bmatrix}A_{+} \\ A_{-} \\ B_{+} \\ B_{-} \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Observe that

\begin{aligned}\kappa - m_1 \omega_\circ^2=\kappa - m_1 \kappa \left( \frac{1}{{m_1}} + \frac{1}{{m_2}} \right)=\kappa \left( 1 - 1 - \frac{m_1}{m_2} \right)= -\kappa \frac{m_1}{m_2},\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

(with a similar alternate result for the m_2 entries). We can rewrite this matrix equation as

\begin{aligned}0 =-\kappa\begin{bmatrix}m_1/m_2 & 0 & 1 & 0 \\ 0 & m_1/m_2 & 0 & 1 \\ 1 & 0 & m_2/m_1 & 0 \\ 0 & 1 & 0 & m_2/m_1 \end{bmatrix}\begin{bmatrix}A_{+} \\ A_{-} \\ B_{+} \\ B_{-} \end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

It’s clear that there are two pairs of linear dependencies here, so the determinant is zero as expected. We can read off the remaining relations. Our undetermined coefficients are related by

\begin{aligned}\begin{aligned}B_{0} &= A_{0} \\ m_1 A_{+} &= -m_2 B_{+} \\ m_1 A_{-} &= -m_2 B_{-}, \end{aligned}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

so that, eliminating the dependent coefficients and relabeling the remaining constants, the solution is

\begin{aligned}\boxed{\begin{aligned}u_1(t) &= a+ m_2 b e^{ -i \omega_\circ t }+ m_2 c e^{ i \omega_\circ t } \\ u_2(t) &= a- m_1 b e^{ -i \omega_\circ t }- m_1 c e^{ i \omega_\circ t }.\end{aligned}}\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Observe that the constant term is not really of interest, since it represents a constant displacement of both atoms (just a change of coordinates).

Check:

\begin{aligned}u_1(t) - u_2(t) = + (m_1 + m_2)b e^{ -i \omega_\circ t }+ (m_1 + m_2)c e^{ i \omega_\circ t },\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

\begin{aligned}m_1 \ddot{u}_1(t) + \kappa (u_1(t) - u_2(t) )=-m_1 m_2 \omega_\circ^2 \left( b e^{ -i \omega_\circ t } + c e^{ i \omega_\circ t } \right)+(m_1 + m_2) \kappa \left( b e^{ -i \omega_\circ t } + c e^{ i \omega_\circ t } \right)=\left( -m_1 m_2 \omega_\circ^2 + (m_1 + m_2) \kappa \right)\left( b e^{ -i \omega_\circ t } + c e^{ i \omega_\circ t } \right)=\left( -m_1 m_2 \kappa \frac{m_1 + m_2}{m_1 m_2} + (m_1 + m_2) \kappa \right)\left( b e^{ -i \omega_\circ t } + c e^{ i \omega_\circ t } \right)= 0.\end{aligned} \hspace{\stretch{1}}(1.0.35.35)

Reflection

We’ve seen that we can solve any of these constant coefficient systems exactly using matrix methods, however, these will not be practical for large systems unless we have methods to solve for all the non-zero eigenvalues and their corresponding eigenvectors. With the Fourier transform methods we find that our solutions in the frequency domain are of the form

\begin{aligned}\tilde{u}_j(\omega) = \sum a_{kj} \delta( \omega - \omega_{kj} ),\end{aligned} \hspace{\stretch{1}}(1.63)

or in the time domain

\begin{aligned}u_j(t) = \sum a_{kj} e^{ - i \omega_{kj} t }.\end{aligned} \hspace{\stretch{1}}(1.64)

We assumed exactly this form of solution in class. The trial solution that we used in class factored out a phase shift from a_{kj} of the form e^{ i q x_n }, but that doesn’t change the underlying form of that assumed solution. We have, however, found a good justification for the trial solution we utilized.



An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti

Posted by peeterjoot on March 3, 2013

In A compilation of notes, so far, for ‘PHY452H1S Basic Statistical Mechanics’ I posted a link to this compilation of statistical mechanics course notes.

That compilation now includes all of the following too (no further updates will be made to any of these):

February 28, 2013 Rotation of diatomic molecules

February 28, 2013 Helmholtz free energy

February 26, 2013 Statistical and thermodynamic connection

February 24, 2013 Ideal gas

February 16, 2013 One dimensional well problem from Pathria chapter II

February 15, 2013 1D pendulum problem in phase space

February 14, 2013 Continuing review of thermodynamics

February 13, 2013 Lightning review of thermodynamics

February 11, 2013 Cartesian to spherical change of variables in 3d phase space

February 10, 2013 n SHO particle phase space volume

February 10, 2013 Change of variables in 2d phase space

February 10, 2013 Some problems from Kittel chapter 3

February 07, 2013 Midterm review, thermodynamics

February 06, 2013 Limit of unfair coin distribution, the hard way

February 05, 2013 Ideal gas and SHO phase space volume calculations

February 03, 2013 One dimensional random walk

February 02, 2013 1D SHO phase space

February 02, 2013 Application of the central limit theorem to a product of random vars

January 31, 2013 Liouville’s theorem questions on density and current

January 30, 2013 State counting


PHY452H1S Basic Statistical Mechanics. Problem Set 2: Generating functions and diffusion

Posted by peeterjoot on January 26, 2013

[Click here for a PDF of this post with nicer formatting]

Disclaimer

This is an ungraded set of answers to the problems posed.

Question: Diffusion

The usual diffusion equation for the probability density in one dimension is given by

\begin{aligned}\frac{\partial {P}}{\partial {t}}(x, t) = D \frac{\partial^2 {{P}}}{\partial {{x}}^2}(x, t)\end{aligned} \hspace{\stretch{1}}(1.0.1)

where D is the diffusion constant. Define the Fourier components of the probability distribution via

\begin{aligned}P(x, t) = \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, t) \exp\left( i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.2a)

\begin{aligned}\tilde{P}(k, t) = \int_{-\infty}^\infty dx P(x, t) \exp\left( -i k x \right)\end{aligned} \hspace{\stretch{1}}(1.0.2b)

This is useful since the diffusion equation is linear in the probability and each Fourier component will evolve independently. Using this, solve the diffusion equation to obtain \tilde{P}(k,t) in Fourier space given the initial \tilde{P}(k,0).

Assuming an initial Gaussian profile

\begin{aligned}P(x, 0) = \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(-\frac{x^2}{2 \sigma^2}\right),\end{aligned} \hspace{\stretch{1}}(1.3)

obtain the probability density P(x,t) at a later time t. (NB: Fourier transform, get the solution, transform back.) Schematically plot the profile at the initial time and a later time.

A small modulation on top of a uniform value

Let the probability density be proportional to

\begin{aligned}\frac{1}{{L}} + A \sin(k_0 x)\end{aligned} \hspace{\stretch{1}}(1.4)

at an initial time t = 0. Assume this is in a box of large size L, but ignore boundary effects except to note that it will help to normalize the constant piece, assuming the oscillating piece integrates to zero. Also note that we have
to assume A < 1/L to ensure that the probability density is positive. Obtain P(x,t) at a later time t. Roughly how long does the modulation take to decay away? Schematically plot the profile at the initial time and a later time.

Answer

Inserting the transform definitions we have

\begin{aligned}0 &= \left( \frac{\partial {}}{\partial {t}} - D \frac{\partial^2 {{}}}{\partial {{x}}^2} \right) P \\ &= \left( \frac{\partial {}}{\partial {t}} - D \frac{\partial^2 {{}}}{\partial {{x}}^2} \right) \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, t) \exp\left( i k x \right) \\ &=\int_{-\infty}^\infty \frac{dk}{2 \pi} \left(\frac{\partial {}}{\partial {t}} \tilde{P}(k, t) + k^2 D\tilde{P}(k, t) \right)\exp\left( i k x \right),\end{aligned} \hspace{\stretch{1}}(1.0.5)

We conclude that

\begin{aligned}0 = \frac{\partial {}}{\partial {t}} \tilde{P}(k, t) + k^2 D\tilde{P}(k, t),\end{aligned} \hspace{\stretch{1}}(1.0.6)

or

\begin{aligned}\tilde{P}(k, t) = A(k) e^{-k^2 D t}.\end{aligned} \hspace{\stretch{1}}(1.0.7)

Fixing the integration constant by the initial condition, so that A(k) = \tilde{P}(k, 0), we can write

\begin{aligned}\boxed{\tilde{P}(k, t) = \tilde{P}(k, 0) e^{-k^2 D t}.}\end{aligned} \hspace{\stretch{1}}(1.0.8)

The time evolution of the distribution’s transform just requires multiplication by the decaying exponential factor e^{-k^2 D t}.

Propagator for the diffusion equation

We can also use this to express the explicit time evolution of the distribution

\begin{aligned}P(x, t) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \tilde{P}(k, 0) e^{-k^2 D t}\exp\left( i k x \right) \\ &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \int_{-\infty}^\infty dx' P(x', 0) \exp\left( -i k x' \right)e^{-k^2 D t}\exp\left( i k x \right) \\ &= \int_{-\infty}^\infty dx' P(x', 0) \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right)\end{aligned} \hspace{\stretch{1}}(1.0.9)

Our distribution time evolution is given by convolution with a propagator function

\begin{aligned}P(x, t) = \int dx' P(x', 0) G(x', x)\end{aligned} \hspace{\stretch{1}}(1.0.10a)

\begin{aligned}G(x', x) =\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right)\end{aligned} \hspace{\stretch{1}}(1.0.10b)

For t \ge 0 we can complete the square, finding that this propagator is

\begin{aligned}G(x', x) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( -k^2 D t + i k (x - x') \right) \\ &= \exp\left( \left(\frac{i (x - x')}{2 \sqrt{D t}} \right)^2 \right)\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left( - \left(k \sqrt{D t} + \frac{i (x - x')}{2 \sqrt{D t}} \right)^2 \right)\end{aligned} \hspace{\stretch{1}}(1.0.11)

or

\begin{aligned}\boxed{G(x', x) =\frac{1}{\sqrt{4 \pi D t}} \exp\left(-\frac{(x - x')^2}{4 D t}\right).}\end{aligned} \hspace{\stretch{1}}(1.0.12)

This propagator is plotted schematically as a function of t for fixed x - x' in (Fig1).

Fig1: Form of the propagator function for the diffusion equation

Gaussian

For the Gaussian of 1.3 we compute the initial time Fourier transform

\begin{aligned}\tilde{P}(k) &= \int_{-\infty}^\infty dx \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(-\frac{x^2}{2 \sigma^2}-i k x \right) \\ &= \frac{1}{{\sqrt{2 \pi \sigma^2}}} \exp\left(\left( \frac{\sqrt{ 2 \sigma^2}}{2} k i\right)^2\right)\int_{-\infty}^\infty dx \exp\left(-\left( \frac{x}{\sqrt{2 \sigma^2} } + \frac{\sqrt{ 2 \sigma^2}}{2} k i\right)^2\right) \\ &= \exp\left(-\frac{\sigma^2 k^2}{2}\right).\end{aligned} \hspace{\stretch{1}}(1.0.13)

The time evolution of the generating function is

\begin{aligned}\tilde{P}(k, t) = \exp\left(-\frac{\sigma^2 k^2}{2} - D k^2 t\right),\end{aligned} \hspace{\stretch{1}}(1.0.14)

and we can find our time evolved probability density by inverse transforming

\begin{aligned}P(x, t) &= \int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left(-\frac{\sigma^2 k^2}{2} - D k^2 t + i k x\right) \\ &= \exp\left(\left(\frac{i x}{ 2 \sqrt{\frac{\sigma^2}{2} + D t} }\right)^2\right)\int_{-\infty}^\infty \frac{dk}{2 \pi} \exp\left(-\left(k \sqrt{\frac{\sigma^2}{2} + D t} + i \frac{x}{ 2 \sqrt{\frac{\sigma^2}{2} + D t} }\right)^2\right)\end{aligned} \hspace{\stretch{1}}(1.0.15)

For t \ge 0 this is

\begin{aligned}\boxed{P(x, t) =\frac{1}{{\sqrt{2 \pi \left( \sigma^2 + 2 D t \right) } }}\exp\left(-\frac{x^2}{2 \left( \sigma^2 + 2 D t \right) }\right).}\end{aligned} \hspace{\stretch{1}}(1.0.16)

As a check, we see that this reproduces the t = 0 value as expected. A further check using Mathematica, applying the propagator 1.0.12, also finds the same result as this manual calculation.
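We can also check this by evolving the initial profile numerically, applying the e^{-k^2 D t} factor of 1.0.8 to each discrete Fourier mode and comparing against 1.0.16. A numpy sketch of mine, with made-up grid parameters:

import numpy as np

D, sigma, t = 1.0, 1.0, 0.5
N, Lbox = 4096, 80.0                      # made-up grid: N samples on [-40, 40)
x = (np.arange(N) - N // 2) * (Lbox / N)
P0 = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# evolve each Fourier mode by exp(-k^2 D t), then transform back
k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
Pt = np.fft.ifft(np.fft.fft(P0) * np.exp(-k**2 * D * t)).real

closed = np.exp(-x**2 / (2 * (sigma**2 + 2 * D * t))) / np.sqrt(2 * np.pi * (sigma**2 + 2 * D * t))
print(np.max(np.abs(Pt - closed)))        # -> tiny, numerically zero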

This is plotted for D = \sigma = 1 in (Fig2) for a couple different times t \ge 0.

Fig2: Gaussian probability density time evolution with diffusion

Boxed constant with small oscillation

The normalization of the distribution depends on the interval boundaries. With the box range given by x \in [a, a + L] we have

\begin{aligned}\int_a^{a + L} \left( \frac{1}{{L}} + A \sin( k_0 x) \right) dx=1 - \frac{A}{k_0} \left( \cos( k_0(a + L) ) - \cos( k_0 a ) \right)\end{aligned} \hspace{\stretch{1}}(1.0.17)

With an even range for the box, x \in [-L/2, L/2], this is unity.

To find the distribution at a later point in time we can utilize the propagator

\begin{aligned}P(x, t) = \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \left( \frac{1}{{L}} + A \sin( k_0 x' ) \right) \exp\left( - \frac{(x' - x)^2}{4 D t }\right)\end{aligned} \hspace{\stretch{1}}(1.0.18)

Let’s write this as

\begin{aligned}P(x, t) = P_{\mathrm{rect}}(x, t) + P_{\mathrm{sin}}(x, t)\end{aligned} \hspace{\stretch{1}}(1.0.19a)

\begin{aligned}P_{\mathrm{rect}}(x, t) =\frac{1}{{2 L \sqrt{\pi D t} }} \int_{-L/2}^{L/2} dx' \exp\left( - \frac{(x' - x)^2}{4 D t }\right)\end{aligned} \hspace{\stretch{1}}(1.0.19b)

\begin{aligned}P_{\mathrm{sin}}(x, t)=\frac{A}{2 \sqrt{\pi D t} } \int_{-L/2}^{L/2} dx' \sin( k_0 x' ) \exp\left( - \frac{(x' - x)^2}{4 D t }\right)\end{aligned} \hspace{\stretch{1}}(1.0.19c)

Applying a u = (x' - x)/\sqrt{4 D t} change of variables for the first term, we can reduce it to a difference of error functions

\begin{aligned}P_{\mathrm{rect}}(x, t) &= \frac{1}{{L}} \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \exp\left( - \frac{(x' - x)^2}{4 D t }\right) \\ &= \frac{1}{{L \sqrt{\pi}}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{-u^2} \\ &= \frac{1}{{2 L}} \left( \text{erf}\left( \frac{L/2 -x}{2 \sqrt{D t}} \right)-\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} \right)\right)\end{aligned} \hspace{\stretch{1}}(1.0.20)

Following Mathematica, let’s introduce a two argument error function for the difference between two points

\begin{aligned}\text{erf}(z_0, z_1) \equiv \text{erf}(z_1) - \text{erf}(z_0).\end{aligned} \hspace{\stretch{1}}(1.0.21)

Using that, our rectangular function’s time evolution can be written

\begin{aligned}P_{\mathrm{rect}}(x, t)=\frac{1}{{2 L}} \text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} ,\frac{L/2 -x}{2 \sqrt{D t}} \right)\end{aligned} \hspace{\stretch{1}}(1.0.22)
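In code this convention is just a thin wrapper. A small Python sketch of mine (note that scipy's erf accepts complex arguments, which the sine piece below requires):

import numpy as np
from scipy.special import erf

def erf2(z0, z1):
    """Two argument error function: erf(z0, z1) = erf(z1) - erf(z0)."""
    return erf(z1) - erf(z0)

def P_rect(x, t, L=1.0, D=1.0):
    """The rectangular piece, eq. (1.0.22)."""
    s = 2 * np.sqrt(D * t)
    return erf2(-(L / 2 + x) / s, (L / 2 - x) / s) / (2 * L)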

For L = D = 1, and t = 10^{-8}, this is plotted in (Fig3). Somewhat surprisingly, this difference of error functions does appear to result in a rectangular function for small t.

Fig3: Rectangular part of the probability distribution for very small t

The time evolution of this non-oscillation part of the probability distribution is plotted as a function of both t and x in (Fig4).

Fig4: Time evolution of the rectangular part of the probability distribution

For the sine piece we can also find a solution in terms of (complex) error functions

\begin{aligned}P_{\mathrm{sin}}(x, t) &= A \int_{-L/2}^{L/2} dx' \frac{1}{{2 \sqrt{\pi D t} }} \sin( k_0 x' ) \exp\left( - \frac{(x' - x)^2}{4 D t }\right) \\ &= \frac{A}{\sqrt{\pi}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du \sin( k_0 ( x + 2 u \sqrt{D t} ) ) e^{-u^2} \\ &= \frac{A}{2 i \sqrt{\pi}}\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du \left( e^{ i k_0 ( x + 2 u \sqrt{D t} ) } -e^{ -i k_0 ( x + 2 u \sqrt{D t} ) }\right)e^{-u^2} \\ &= \frac{A}{2 i \sqrt{\pi}}\left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -u^2 + 2 i k_0 u \sqrt{D t} } -e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -u^2 -2 i k_0 u \sqrt{D t} }\right) \\ &= \frac{A}{2 i \sqrt{\pi}}e^{ -k_0^2 D t } \left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -(u - i k_0 \sqrt{D t})^2 } -e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}}}^{\frac{L/2 - x}{2 \sqrt{Dt}}} du e^{ -(u + i k_0 \sqrt{D t})^2 }\right) \\ &= \frac{A}{2 i \sqrt{\pi}}e^{ -k_0^2 D t } \left( e^{ i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}}^{\frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t}} dv e^{ -v^2 }-e^{ -i k_0 x }\int_{-\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}}^{\frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}} dv e^{ -v^2 }\right) \\ &= \frac{A}{4 i }e^{ -k_0^2 D t } \left( e^{ i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t} \right)-e^{ -i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}\right)\right)\end{aligned} \hspace{\stretch{1}}(1.0.23)

This is plotted for A = D = L = 1, k_0 = 8 \pi, and t = 10^{-8} in (Fig5).

Fig5: Verification at t -> 0 that the diffusion result is sine like

The diffusion of this, again for A = D = L = 1, k_0 = 8 \pi, and t \in [10^{-5}, 0.01] is plotted in (Fig6). Again we see that we have the expected sine for small t.

Fig6: Diffusion of the oscillatory term

Putting both the rectangular and the windowed sine portions of the probability distribution together, we have the diffusion result for the entire distribution

\begin{aligned}\boxed{\begin{aligned}P(x, t)&=\frac{1}{{2 L}} \text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} ,\frac{L/2 -x}{2 \sqrt{D t}} \right) \\ &+\frac{A}{4 i }e^{ -k_0^2 D t + i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} - i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} - i k_0 \sqrt{D t} \right) \\ &-\frac{A}{4 i }e^{ -k_0^2 D t -i k_0 x }\text{erf}\left( -\frac{L/2 +x}{2 \sqrt{D t}} + i k_0 \sqrt{D t}, \frac{L/2 - x}{2 \sqrt{Dt}} + i k_0 \sqrt{D t}\right)\end{aligned}}\end{aligned} \hspace{\stretch{1}}(1.0.24)

It is certainly ugly looking! We see that the oscillation die-off is governed by the \exp( -k_0^2 D t) term. In time

\begin{aligned}t = \frac{1}{{k_0^2 D}},\end{aligned} \hspace{\stretch{1}}(1.0.25)

that oscillation dies away to 1/e of its initial amplitude. This dispersion is plotted at times t = 10^{-5} and t = 1/(k_0^2 D) for L = D = 1, k_0 = 8 \pi and A = 1/2 in (Fig7).

Fig7: Initial time distribution and dispersion of the oscillatory portion to 1/e of initial amplitude

Similar to the individual plots of P_{\mathrm{rect}}(x, t) and P_{\mathrm{sin}}(x, t) above, we plot the time evolution of the total probability distribution P(x, t) in (Fig8). We see in the plots above that the rectangular portion of this distribution will continue to flatten over time after most of the oscillation has died off.

Fig8: Diffusion of uniform but oscillating probability distribution

An easier solution for the sinusoidal part

After working this problem, talking with classmates about how they solved it (because I was sure I’d done this windowed oscillating distribution the hard way), I now understand what was meant by “ignore boundary effects”. That is, ignore the boundary effects in the sinusoid portion of the distribution. I didn’t see how we could ignore the boundary effects because doing so would make the sine Fourier transform non-convergent. Ignoring pesky ideas like convergence we can “approximate” the Fourier transform of the windowed sine as

\begin{aligned}\tilde{P}_{\mathrm{sin}}(k) &\approx A \int_{-\infty}^\infty \sin (k_0 x) e^{-i k x} dx \\ &= \frac{A }{2 i} \int_{-\infty}^\infty \left(e^{i (k_0 - k) x} -e^{-i (k_0 + k) x} \right)dx \\ &= \frac{A \pi}{i} \left( \delta(k - k_0) - \delta(k + k_0)\right)\end{aligned} \hspace{\stretch{1}}(1.0.26)

Now we can inverse Fourier transform the diffusion result with ease since we’ve got delta functions. That is

\begin{aligned}P_{\mathrm{sin}}(x, t) &\approx \frac{1}{{2 \pi}} \frac{A \pi}{i} \int\left( \delta(k - k_0) - \delta(k + k_0)\right)e^{-D k^2 t} e^{i k x} dk \\ &= A e^{-D k_0^2 t} \frac{ e^{i k_0 x} - e^{-i k_0 x}}{2 i} \\ &= A e^{-D k_0^2 t} \sin( k_0 x )\end{aligned} \hspace{\stretch{1}}(1.0.27)

Question: Generating function

The Fourier transform \tilde{P}(k) of the probability distribution defined above is called the “generating function” of the distribution. Show that the n-th derivative of this generating function \partial^n \tilde{P}(k)/\partial k^n at the origin k = 0 is related to the n-th moment of the distribution function defined via \left\langle{{x^n}}\right\rangle = \int dx P(x) x^n. We will later see that the “partition function” in statistical mechanics is closely related to this concept of a generating function, and derivatives of this partition function can be related to thermodynamic averages of various observables.

Answer

\begin{aligned}{\left.{{ \frac{\partial^n}{\partial k^n} \tilde{P}(k) }}\right\vert}_{{k = 0}} &= {\left.{{ \frac{\partial^n}{\partial k^n} \left( \int_{-\infty}^\infty dx P(x) \exp\left( -i k x \right)\right)}}\right\vert}_{{k = 0}} \\ &= {\left.{{ \left(\int_{-\infty}^\infty dx P(x) (-i x)^n\exp\left( -i k x \right)\right)}}\right\vert}_{{k = 0}} \\ &= (-i)^n \int_{-\infty}^\infty dx P(x) x^n \\ &= (-i)^n \left\langle{{x^n}}\right\rangle\end{aligned} \hspace{\stretch{1}}(1.0.26)
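A sympy spot check of this relation against the Gaussian of eq. 1.3 (my own verification sketch):

import sympy as sp

x, k = sp.symbols('x k', real=True)
sigma = sp.symbols('sigma', positive=True)
P = sp.exp(-x**2 / (2 * sigma**2)) / sp.sqrt(2 * sp.pi * sigma**2)

# generating function, with the e^{-ikx} transform convention used above
Pk = sp.integrate(P * sp.exp(-sp.I * k * x), (x, -sp.oo, sp.oo))

for n in range(4):
    deriv = sp.diff(Pk, k, n).subs(k, 0)
    moment = sp.integrate(P * x**n, (x, -sp.oo, sp.oo))
    print(n, sp.simplify(deriv - (-sp.I)**n * moment) == 0)   # True for each n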


PHY456H1F: Quantum Mechanics II. Lecture 21 (Taught by Prof J.E. Sipe). Scattering theory

Posted by peeterjoot on November 24, 2011

[Click here for a PDF of this post with nicer formatting and figures if the post had any (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Scattering theory.

READING: section 19, section 20 of the text [1].

Here’s a simple classical picture of a two particle scattering collision.

Fig: Classical collision of particles.

We will focus on point particle elastic collisions (no energy lost in the collision). With particles of mass m_1 and m_2 we write for the total and reduced mass respectively

\begin{aligned}M = m_1 + m_2\end{aligned} \hspace{\stretch{1}}(2.1)

\begin{aligned}\frac{1}{{\mu}} = \frac{1}{{m_1}} + \frac{1}{{m_2}},\end{aligned} \hspace{\stretch{1}}(2.2)

so that interaction due to a potential V(\mathbf{r}_1 - \mathbf{r}_2) that depends on the difference in position \mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2 has, in the center of mass frame, the Hamiltonian

\begin{aligned}H = \frac{\mathbf{p}^2}{2 \mu} + V(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.3)

In the classical picture we would investigate the scattering radius r_0 associated with the impact parameter \rho, as depicted below.

Fig: Classical scattering radius and impact parameter.

1D QM scattering. No potential wave packet time evolution.

Now let’s move to the QM picture, where we assume that we have a particle that can be represented as a wave packet.

Fig: Wave packet for a particle wavefunction \Re(\psi(x,0)).

First, without any potential (V(x) = 0), let’s consider the evolution. Our position and momentum space representations are related by

\begin{aligned}\int {\left\lvert{\psi(x, t)}\right\rvert}^2 dx = 1 = \int {\left\lvert{\overline{\psi}(p, t)}\right\rvert}^2 dp,\end{aligned} \hspace{\stretch{1}}(2.4)

and by Fourier transform

\begin{aligned}\psi(x, t) = \int \frac{dp}{\sqrt{2 \pi \hbar}} \overline{\psi}(p, t) e^{i p x/\hbar}.\end{aligned} \hspace{\stretch{1}}(2.5)

Schrödinger’s equation takes the form

\begin{aligned}i \hbar \frac{\partial {\psi(x,t)}}{\partial {t}} = - \frac{\hbar^2}{2 \mu} \frac{\partial^2 {{\psi(x, t)}}}{\partial {{x}}^2},\end{aligned} \hspace{\stretch{1}}(2.6)

or more simply in momentum space

\begin{aligned}i \hbar \frac{\partial {\overline{\psi}(p,t)}}{\partial {t}} = \frac{p^2}{2 \mu} \overline{\psi}(p, t).\end{aligned} \hspace{\stretch{1}}(2.7)

Rearranging to integrate we have

\begin{aligned}\frac{\partial {\overline{\psi}}}{\partial {t}} = -\frac{i p^2}{2 \mu \hbar} \overline{\psi},\end{aligned} \hspace{\stretch{1}}(2.8)

and integrating

\begin{aligned}\ln \overline{\psi} = -\frac{i p^2 t}{2 \mu \hbar} + \ln C,\end{aligned} \hspace{\stretch{1}}(2.9)

or

\begin{aligned}\overline{\psi} = C e^{-\frac{i p^2 t}{2 \mu \hbar}} = \overline{\psi}(p, 0) e^{-\frac{i p^2 t}{2 \mu \hbar}}.\end{aligned} \hspace{\stretch{1}}(2.10)

Time evolution in momentum space for the free particle changes only the phase of the wavefunction, leaving the momentum probability density of that particle unchanged.

Fourier transforming, we find our position space wavefunction to be

\begin{aligned}\psi(x, t) = \int \frac{dp}{\sqrt{2 \pi \hbar}} \overline{\psi}(p, 0) e^{i p x/\hbar} e^{-i p^2 t/2 \mu \hbar}.\end{aligned} \hspace{\stretch{1}}(2.11)

To clean things up, write

\begin{aligned}p = \hbar k,\end{aligned} \hspace{\stretch{1}}(2.12)

for

\begin{aligned}\psi(x, t) = \int \frac{dk}{\sqrt{2 \pi}} a(k, 0) e^{i k x} e^{-i \hbar k^2 t/2 \mu},\end{aligned} \hspace{\stretch{1}}(2.13)

where

\begin{aligned}a(k, 0) = \sqrt{\hbar} \overline{\psi}(p, 0).\end{aligned} \hspace{\stretch{1}}(2.14)

Putting

\begin{aligned}a(k, t) = a(k, 0) e^{ -i \hbar k^2 t/2 \mu},\end{aligned} \hspace{\stretch{1}}(2.15)

we have

\begin{aligned}\psi(x, t) = \int \frac{dk}{\sqrt{2 \pi}} a(k, t) e^{i k x}. \end{aligned} \hspace{\stretch{1}}(2.16)

Observe that we have

\begin{aligned}\int dk {\left\lvert{ a(k, t)}\right\rvert}^2 = \int dp {\left\lvert{ \overline{\psi}(p, t)}\right\rvert}^2 = 1.\end{aligned} \hspace{\stretch{1}}(2.17)

A Gaussian wave packet

Fig: Gaussian wave packet.

Suppose that we have, as depicted above, a Gaussian wave packet of the form

\begin{aligned}\psi(x, 0) = \frac{1}{ (\pi \Delta^2)^{1/4}} e^{i k_0 x} e^{- x^2/2 \Delta^2}.\end{aligned} \hspace{\stretch{1}}(2.18)

This is actually a minimum uncertainty packet with

\begin{aligned}\Delta x &= \frac{\Delta}{\sqrt{2}} \\ \Delta p &= \frac{\hbar}{\Delta \sqrt{2}}.\end{aligned} \hspace{\stretch{1}}(2.19)

Taking Fourier transforms we have

\begin{aligned}a(k, 0) &= \left(\frac{\Delta^2}{\pi}\right)^{1/4} e^{-(k - k_0)^2 \Delta^2/2} \\ a(k, t) &= \left(\frac{\Delta^2}{\pi}\right)^{1/4} e^{-(k - k_0)^2 \Delta^2/2} e^{ -i \hbar k^2 t/ 2\mu} \equiv \alpha(k, t)\end{aligned} \hspace{\stretch{1}}(2.21)

For t > 0 our wave packet will start moving and spreading, as sketched below.

Fig: Moving, spreading Gaussian packet.
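Here's a small numpy sketch of mine of that motion and spreading, evolving the packet by applying the free particle phase of 2.15 to each Fourier component (units with \hbar = \mu = 1, and all grid parameters are made up):

import numpy as np

hbar = mu = 1.0
Delta, k0, t = 1.0, 5.0, 2.0
N, Lbox = 4096, 200.0
x = (np.arange(N) - N // 2) * (Lbox / N)

# minimum uncertainty packet of 2.18
psi0 = np.exp(1j * k0 * x - x**2 / (2 * Delta**2)) / (np.pi * Delta**2)**0.25

# multiply each plane wave component by its free particle phase, 2.15
k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
psit = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * mu)))

dx = Lbox / N
print(np.sum(np.abs(psit)**2) * dx)       # norm preserved: -> 1.0
print(np.sum(x * np.abs(psit)**2) * dx)   # center at ~ hbar k0 t / mu = 10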

With a potential.

Now “switch on” a potential, still assuming a wave packet representation for the particle. With a positive (repulsive) potential, at a time long before the interaction of the wave packet with the potential, we can visualize the packet as heading towards the barrier.

Fig: QM wave packet prior to interaction with repulsive potential.

After some time long after the interaction, classically for this sort of potential where the particle kinetic energy is less than the barrier “height”, we would have total reflection. In the QM case, we’ve seen before that we will have a reflected and a transmitted portion of the wave packet.

Fig: QM wave packet long after interaction with repulsive potential.

Even if the particle kinetic energy is greater than the barrier height, we can still have a reflected component.

Fig: Kinetic energy greater than potential energy.

This is even true for a negative potential!

Fig: Wave packet interaction with a negative potential.

Considering the probability for the particle to be found anywhere long after the interaction, and summing over the transmitted and reflected wave functions, we have

\begin{aligned}1 &= \int {\left\lvert{\psi_r + \psi_t}\right\rvert}^2 \\ &= \int {\left\lvert{\psi_r}\right\rvert}^2  + \int {\left\lvert{\psi_t}\right\rvert}^2 + 2 \Re \int \psi_r^{*} \psi_t\end{aligned}

Observe that long after the interaction the cross terms in the probabilities will vanish, because the reflected and transmitted wave functions are non-overlapping, leaving just the probability densities for the transmitted and reflected portions independently.

We define

\begin{aligned}T &= \int {\left\lvert{\psi_t(x, t)}\right\rvert}^2 dx \\ R &= \int {\left\lvert{\psi_r(x, t)}\right\rvert}^2 dx.\end{aligned} \hspace{\stretch{1}}(2.23)

The objective of most of our scattering problems will be the calculation of these probabilities and the comparisons of their ratios.

Question. Can we have more than one wave packet reflect off? Yes, we could have multiple wave packets for both the reflected and the transmitted portions. For example, if the potential has some internal structure there could be internal reflections before anything emerges on either side, and things could get quite messy.

Considering the time independent case temporarily.

We are going to work through something that is going to seem at first to be completely unrelated. We will (eventually) see that this can be applied to this problem, so a bit of patience will be required.

We will be using the time independent Schrödinger equation

\begin{aligned}- \frac{\hbar^2}{2 \mu} \psi_k''(x) + V(x) \psi_k(x) = E \psi_k(x),\end{aligned} \hspace{\stretch{1}}(3.25)

where we have added a subscript k to our wave function with the intention (later) of allowing this to vary. For “future use” we define for k > 0

\begin{aligned}E = \frac{\hbar^2 k^2}{2 \mu}.\end{aligned} \hspace{\stretch{1}}(3.26)

Consider a potential, as sketched below, where V(x) = 0 for x > x_2 and x < x_1.

Fig: Potential zero outside of a specific region.

We won't have bound states here (repulsive potential). There will be many possible solutions, but we want to look for a solution that is of the form

\begin{aligned}\psi_k(x) = C e^{i k x}, \qquad x > x_2\end{aligned} \hspace{\stretch{1}}(3.27)

Supposing that x = x_3 > x_2, we have

\begin{aligned}\psi_k(x_3) = C e^{i k x_3}\end{aligned} \hspace{\stretch{1}}(3.28)

\begin{aligned}{\left.{{\frac{d\psi_k}{dx}}}\right\vert}_{{x = x_3}} = i k C e^{i k x_3} \equiv \phi_k(x_3)\end{aligned} \hspace{\stretch{1}}(3.29)

\begin{aligned}{\left.{{\frac{d^2\psi_k}{dx^2}}}\right\vert}_{{x = x_3}} = -k^2 C e^{i k x_3} \end{aligned} \hspace{\stretch{1}}(3.30)

Defining

\begin{aligned}\phi_k(x) = \frac{d\psi_k}{dx},\end{aligned} \hspace{\stretch{1}}(3.31)

we write Schrödinger’s equation as a pair of coupled first order equations

\begin{aligned}\frac{d\psi_k}{dx} &= \phi_k(x) \\ -\frac{\hbar^2}{2 \mu} \frac{d\phi_k(x)}{dx} &= - V(x) \psi_k(x) + \frac{\hbar^2 k^2}{2\mu} \psi_k(x).\end{aligned} \hspace{\stretch{1}}(3.32)

At this x = x_3 specifically, we “know” both \phi_k(x_3) and \psi_k(x_3) and have

\begin{aligned}{\left.{{\frac{d\psi_k}{dx}}}\right\vert}_{{x_3}} &= \phi_k(x_3) \\ -\frac{\hbar^2}{2 \mu} {\left.{{\frac{d\phi_k(x)}{dx}}}\right\vert}_{{x_3}} &= - V(x_3) \psi_k(x_3) + \frac{\hbar^2 k^2}{2\mu} \psi_k(x_3),\end{aligned} \hspace{\stretch{1}}(3.34)

This allows us to find both

\begin{aligned}{dx}}}\right\vert}_{{x_3}} \\ {dx}}}\right\vert}_{{x_3}} \end{aligned} \hspace{\stretch{1}}(3.36)

then proceed to numerically calculate \phi_k(x) and \psi_k(x) at neighboring points x = x_3 + \epsilon. Essentially, this allows us to numerically integrate backwards from x_3 to find the wave function at previous points for any sort of potential.
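Here's a minimal Python sketch of mine of that backwards integration, using crude Euler steps and a made-up smooth repulsive bump for V(x); a real calculation would want a higher order integrator:

import numpy as np

hbar = mu = 1.0                       # made-up units
k0 = 1.0
E = hbar**2 * k0**2 / (2 * mu)

def V(x):
    return 2.0 * np.exp(-x**2 / 0.1)  # effectively zero outside a small region

x3, C, h = 5.0, 1.0, 1e-4             # start to the right, where psi_k = C e^{i k x}
psi = C * np.exp(1j * k0 * x3)
phi = 1j * k0 * psi                   # phi_k = d psi_k / dx

x = x3
while x > -5.0:                       # Euler step backwards from x_3
    dpsi = phi
    dphi = (2 * mu / hbar**2) * (V(x) - E) * psi
    psi -= h * dpsi
    phi -= h * dphi
    x -= h

print(psi, phi)                       # wave function and derivative at x = -5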

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


PHY456H1F: Quantum Mechanics II. Lecture 8 (Taught by Prof J.E. Sipe). Time dependent perturbation (cont.)

Posted by peeterjoot on October 8, 2011

[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]

Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

Time dependent perturbation.

We’d gotten as far as calculating

\begin{aligned}c_m^{(1)}(\infty) = \frac{1}{{i \hbar}} \boldsymbol{\mu}_{ms} \cdot \mathbf{E}(\omega_{ms})\end{aligned} \hspace{\stretch{1}}(2.1)

where

\begin{aligned}\mathbf{E}(t) = \int \frac{d\omega}{2 \pi} \mathbf{E}(\omega) e^{-i \omega t},\end{aligned} \hspace{\stretch{1}}(2.2)

and

\begin{aligned}\omega_{ms} = \frac{E_m - E_s}{\hbar}.\end{aligned} \hspace{\stretch{1}}(2.3)

Graphically, these frequencies are illustrated below.

Fig: Positive and negative frequencies.

The probability for a transition from s to m is therefore

\begin{aligned}\rho_{s \rightarrow m} = {\left\lvert{ c_m^{(1)}(\infty) }\right\rvert}^2= \frac{1}{{\hbar^2}} {\left\lvert{\boldsymbol{\mu}_{ms} \cdot \mathbf{E}(\omega_{ms})}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(2.4)

Recall that because the electric field is real we had

\begin{aligned}{\left\lvert{\mathbf{E}(\omega)}\right\rvert}^2 = {\left\lvert{\mathbf{E}(-\omega)}\right\rvert}^2.\end{aligned} \hspace{\stretch{1}}(2.5)

Suppose that we have a wave pulse, where our field magnitude is perhaps of the form

\begin{aligned}E(t) = e^{-t^2/T^2} \cos(\omega_0 t),\end{aligned} \hspace{\stretch{1}}(2.6)

as illustrated with \omega_0 = 10, T = 1 in the figure below.

PICTURE: Gaussian wave packet.

We expect this to have a two lobe Fourier spectrum, with the lobes centered at \omega = \pm 10, and width proportional to 1/T.

For reference, as calculated using Mathematica (which uses a symmetric 1/\sqrt{2\pi} normalization convention by default), this Fourier transform simplifies to

\begin{aligned}E(\omega) = \frac{T}{2\sqrt{2}} \left( e^{-\frac{1}{4} T^2 (\omega + \omega_0)^2} + e^{-\frac{1}{4} T^2 (\omega - \omega_0)^2} \right)\end{aligned} \hspace{\stretch{1}}(2.7)

This is illustrated, again for \omega_0 = 10 and T = 1, in the figure below.

PICTURE: Fourier transform of the Gaussian wave packet.

There we see the expected pair of Gaussian lobes, since the Fourier transform of a Gaussian is a Gaussian.
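
As a quick numerical cross check (a sketch of mine, not anything from the lecture), we can sample the pulse and locate the spectral lobes with an FFT:

import numpy as np

# Sample E(t) = exp(-t^2/T^2) cos(w0 t) and locate the spectral lobes.
T, w0 = 1.0, 10.0
t = np.linspace(-20, 20, 4096)
dt = t[1] - t[0]
Et = np.exp(-t**2 / T**2) * np.cos(w0 * t)

# Approximate the continuous transform by a discrete sum over samples.
w = 2 * np.pi * np.fft.fftfreq(len(t), d=dt)
Ew = dt * np.fft.fft(Et)

# The magnitude spectrum should peak near w = +/- w0 = +/- 10,
# with lobe width proportional to 1/T.
print(abs(w[np.argmax(np.abs(Ew))]))  # ~10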

FIXME: not sure what the point of this was?

Sudden perturbations.

Given our wave equation

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi(t)} \rangle} = H(t) {\lvert {\psi(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.8)

and a sudden perturbation in the Hamiltonian, as illustrated in the figure below

PICTURE: Sudden step Hamiltonian.

Consider H_0 and H_F fixed, and let \Delta t \rightarrow 0. We can formally integrate 3.8, writing it as

\begin{aligned}\frac{d{{}}}{dt} {\lvert {\psi(t)} \rangle} = \frac{1}{{i \hbar}} H(t) {\lvert {\psi(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.9)

from which integration gives

\begin{aligned}{\lvert {\psi(t)} \rangle} -{\lvert {\psi(t_0)} \rangle} = \frac{1}{{i \hbar}} \int_{t_0}^t H(t') {\lvert {\psi(t')} \rangle} dt'.\end{aligned} \hspace{\stretch{1}}(3.10)

While this is an exact relation, it is also not terribly useful, since we don’t know {\lvert {\psi(t)} \rangle} on the right hand side. However, we can select the small interval [-\Delta t/2, \Delta t/2] surrounding the sudden change, and write

\begin{aligned}{\lvert {\psi(\Delta t/2)} \rangle} ={\lvert {\psi(-\Delta t/2)} \rangle}+ \frac{1}{{i \hbar}} \int_{-\Delta t/2}^{\Delta t/2} H(t') {\lvert {\psi(t')} \rangle} dt'.\end{aligned} \hspace{\stretch{1}}(3.11)

Note that we could use the integral kernel iteration technique here, substituting {\lvert {\psi(t')} \rangle} = {\lvert {\psi(-\Delta t/2)} \rangle} and then developing this to generate a power series with (\Delta t/2)^k dependence. However, we note that 3.11 is still an exact relation, and if \Delta t \rightarrow 0, with the integration limits narrowing (provided H(t') is well behaved), we are left with just

\begin{aligned}{\lvert {\psi(\Delta t/2)} \rangle} = {\lvert {\psi(-\Delta t/2)} \rangle}\end{aligned} \hspace{\stretch{1}}(3.12)

Or

\begin{aligned}{\lvert {\psi_{\text{after}}} \rangle} = {\lvert {\psi_{\text{before}}} \rangle},\end{aligned} \hspace{\stretch{1}}(3.13)

provided that we change the Hamiltonian fast enough. On the surface there appear to be no consequences, but there are some very serious ones!

Example: Harmonic oscillator.

Consider our harmonic oscillator Hamiltonian, with

\begin{aligned}H_0 &= \frac{P^2}{2m} + \frac{1}{{2}} m \omega_0^2 X^2 \\ H_F &= \frac{P^2}{2m} + \frac{1}{{2}} m \omega_F^2 X^2\end{aligned} \hspace{\stretch{1}}(3.14)

Here \omega_0 \rightarrow \omega_F continuously, but very quickly. In effect, we have suddenly changed the spring constant. Note that there are cases in linear optics where you can actually do exactly that.

Imagine that {\lvert {\psi_{\text{before}}} \rangle} is in the ground state of the harmonic oscillator, as in the figure below,

PICTURE: Harmonic oscillator sudden Hamiltonian perturbation.

and we suddenly change the Hamiltonian with potential V_0 \rightarrow V_F (weakening the “spring”). Professor Sipe gives us a graphical demo of this, impersonating a constrained wavefunction with his arms, doing weak chicken-flapping of them. Now, with the potential weakened, he wiggles and flaps his arms with more freedom and somewhat chaotically. His “wave function” arms are now bouncing around in the new limiting potential (initially overdoing it and then bouncing back).

We had in this case the exact relation

\begin{aligned}H_0 {\lvert {\psi_0^{(0)}} \rangle} = \frac{1}{{2}} \hbar \omega_0 {\lvert {\psi_0^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(3.16)

but we also have

\begin{aligned}{\lvert {\psi_{\text{after}}} \rangle} = {\lvert {\psi_{\text{before}}} \rangle} = {\lvert {\psi_0^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(3.17)

and

\begin{aligned}H_F {\lvert {\psi_n^{(f)}} \rangle} = \hbar \omega_F \left( n + \frac{1}{{2}} \right) {\lvert {\psi_n^{(f)}} \rangle}\end{aligned} \hspace{\stretch{1}}(3.18)

So

\begin{aligned}{\lvert {\psi_{\text{after}}} \rangle}&={\lvert {\psi_0^{(0)}} \rangle} \\ &=\sum_n {\lvert {\psi_n^{(f)}} \rangle}\underbrace{\left\langle{{\psi_n^{(f)}}} \vert {{\psi_0^{(0)}}}\right\rangle }_{c_n} \\ &=\sum_n c_n {\lvert {\psi_n^{(f)}} \rangle}\end{aligned}

and at later times

\begin{aligned}{\lvert {\psi(t)} \rangle}=\sum_n c_n e^{-i \omega_n^{(f)} t} {\lvert {\psi_n^{(f)}} \rangle},\end{aligned}

where \omega_n^{(f)} = \omega_F \left( n + \frac{1}{{2}} \right),

whereas, had the Hamiltonian not changed, we would have had

\begin{aligned}{\lvert {\psi(t)^{(o)}} \rangle}=e^{-i \omega_0^{(0)} t} {\lvert {\psi_0^{(0)}} \rangle}.\end{aligned}

So, while the wave function is exactly the same immediately after such a sudden change in the Hamiltonian, the dynamics of the situation change for all future times, since we now have a wavefunction with a different set of components in the basis for the new Hamiltonian. In particular, the evolution of the wave function is now significantly more complex.

FIXME: plot an example of this.
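
Toward that FIXME, here is a small Python sketch (with assumed values \hbar = m = 1 and an assumed frequency drop \omega_0 = 1 \rightarrow \omega_F = 0.5) that computes the overlap coefficients c_n by direct quadrature:

import numpy as np
from scipy.special import eval_hermite, factorial

# Sudden frequency change w0 -> wF of a harmonic oscillator (hbar = m = 1).
w0, wF = 1.0, 0.5  # assumed illustrative frequencies
x = np.linspace(-15.0, 15.0, 4001)
dx = x[1] - x[0]

def ho_eigenfunction(n, w, x):
    # Standard harmonic oscillator eigenfunction for frequency w.
    norm = (w / np.pi) ** 0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * eval_hermite(n, np.sqrt(w) * x) * np.exp(-w * x**2 / 2)

psi0 = ho_eigenfunction(0, w0, x)
c = np.array([np.sum(ho_eigenfunction(n, wF, x) * psi0) * dx
              for n in range(12)])
print(np.round(c, 4))        # only even n contribute, by parity
print(np.sum(np.abs(c)**2))  # approaches 1 as more terms are kept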

Adiabatic perturbations.

FIXME: what does adiabatic mean in this context? The usage in class sounds like it was just “really slow and gradual”, yet the dictionary definition is “of, relating to, or being a reversible thermodynamic process that occurs without gain or loss of heat and without a change in entropy”.

This is treated in section 17.5.2 of the text [1].

This is the reverse case, and we now vary the Hamiltonian H(t) very slowly.

\begin{aligned}\frac{d{{}}}{dt} {\lvert {\psi(t)} \rangle} = \frac{1}{{i \hbar}} H(t) {\lvert {\psi(t)} \rangle}\end{aligned} \hspace{\stretch{1}}(4.19)

We first consider only non-degenerate states, and at t = 0 write

\begin{aligned}H(0) = H_0,\end{aligned} \hspace{\stretch{1}}(4.20)

and

\begin{aligned}H_0 {\lvert {\psi_s^{(0)}} \rangle} = E_s^{(0)} {\lvert {\psi_s^{(0)}} \rangle}\end{aligned} \hspace{\stretch{1}}(4.21)

Imagine that at each time t we can find the “instantaneous” energy eigenstates

\begin{aligned}H(t) {\lvert {\hat{\psi}_s(t)} \rangle} = E_s(t) {\lvert {\hat{\psi}_s(t)} \rangle} \end{aligned} \hspace{\stretch{1}}(4.22)

These states do not satisfy Schrödinger’s equation, but are simply solutions of the eigenvalue problem. Our standard strategy in perturbation theory is based on analysis of

\begin{aligned}{\lvert {\psi(t)} \rangle} = \sum_n c_n(t) e^{- i \omega_n^{(0)} t} {\lvert {\psi_n^{(0)} } \rangle},\end{aligned} \hspace{\stretch{1}}(4.23)

Here instead

\begin{aligned}{\lvert {\psi(t)} \rangle} = \sum_n b_n(t) {\lvert {\hat{\psi}_n(t)} \rangle},\end{aligned} \hspace{\stretch{1}}(4.24)

we will expand, not using our initial basis, but instead using the instantaneous kets. Plugging into Schrödinger’s equation we have

\begin{aligned}H(t) {\lvert {\psi(t)} \rangle} &= H(t) \sum_n b_n(t) {\lvert {\hat{\psi}_n(t)} \rangle} \\ &= \sum_n b_n(t) E_n(t) {\lvert {\hat{\psi}_n(t)} \rangle} \end{aligned}

This was complicated before, with matrix elements all over the place. Now the right hand side is easy; the time derivative, however, becomes harder. Doing that we find

\begin{aligned}i \hbar \frac{d{{}}}{dt} {\lvert {\psi(t)} \rangle}&=i \hbar\frac{d{{}}}{dt} \sum_n b_n(t) {\lvert {\hat{\psi}_n(t)} \rangle} \\ &=i \hbar\sum_n \left( \frac{d{{b_n(t)}}}{dt} {\lvert {\hat{\psi}_n(t)} \rangle} + b_n(t) \frac{d{{}}}{dt} {\lvert {\hat{\psi}_n(t)} \rangle} \right) \\ &= \sum_n b_n(t) E_n(t) {\lvert {\hat{\psi}_n(t)} \rangle} \end{aligned}

We bra {\langle {\hat{\psi}_m(t)} \rvert} into this

\begin{aligned}i \hbar\sum_n \frac{d{{b_n(t)}}}{dt} \left\langle{{\hat{\psi}_m(t)}} \vert {{\hat{\psi}_n(t)}}\right\rangle+ i \hbar \sum_n b_n(t) {\langle {\hat{\psi}_m(t)} \rvert}\frac{d{{}}}{dt} {\lvert {\hat{\psi}_n(t)} \rangle} = \sum_n b_n(t) E_n(t) \left\langle{{\hat{\psi}_m(t)}} \vert {{\hat{\psi}_n(t)}}\right\rangle ,\end{aligned} \hspace{\stretch{1}}(4.25)

and find

\begin{aligned}i \hbar\frac{d{{b_m(t)}}}{dt} + i \hbar \sum_n b_n(t) {\langle {\hat{\psi}_m(t)} \rvert}\frac{d{{}}}{dt} {\lvert {\hat{\psi}_n(t)} \rangle} = b_m(t) E_m(t) \end{aligned} \hspace{\stretch{1}}(4.26)

If the Hamiltonian is changed very, very slowly in time, we can imagine that {\lvert {\hat{\psi}_n(t)} \rangle} is also changing very slowly, but we are not quite there yet. Let’s first split our sum of bra and ket products

\begin{aligned}\sum_n b_n(t) {\langle {\hat{\psi}_m(t)} \rvert}\frac{d{{}}}{dt} {\lvert {\hat{\psi}_n(t)} \rangle} \end{aligned} \hspace{\stretch{1}}(4.27)

into n \ne m and n = m terms. Looking at just the n = m term

\begin{aligned}{\langle {\hat{\psi}_m(t)} \rvert}\frac{d{{}}}{dt} {\lvert {\hat{\psi}_m(t)} \rangle} \end{aligned} \hspace{\stretch{1}}(4.28)

we note

\begin{aligned}0 &=\frac{d{{}}}{dt} \left\langle{{\hat{\psi}_m(t)}} \vert {{\hat{\psi}_m(t)}}\right\rangle \\ &=\left( \frac{d{{}}}{dt} {\langle {\hat{\psi}_m(t)} \rvert} \right) {\lvert {\hat{\psi}_m(t)} \rangle} + {\langle {\hat{\psi}_m(t)} \rvert} \frac{d{{}}}{dt} {\lvert {\hat{\psi}_m(t)} \rangle}\end{aligned}

Something plus its complex conjugate equals 0

\begin{aligned}a + i b + (a + i b)^{*} = 2 a = 0 \implies a = 0,\end{aligned} \hspace{\stretch{1}}(4.29)

so {\langle {\hat{\psi}_m(t)} \rvert} \frac{d{{}}}{dt} {\lvert {\hat{\psi}_m(t)} \rangle} must be purely imaginary. We write

\begin{aligned}{\langle {\hat{\psi}_m(t)} \rvert} \frac{d{{}}}{dt} {\lvert {\hat{\psi}_m(t)} \rangle} = -i \Gamma_m(t),\end{aligned} \hspace{\stretch{1}}(4.30)

where \Gamma_m is real.

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.


PHY450H1S. Relativistic Electrodynamics Lecture 15 (Taught by Prof. Erich Poppitz). Fourier solution of Maxwell’s vacuum wave equation in the Coulomb gauge.

Posted by peeterjoot on March 2, 2011

[Click here for a PDF of this post with nicer formatting]

Reading.

Covering chapter 6 material from the text [1].

Covering lecture notes pp. 115-127: reminder on wave equations (115); reminder on Fourier series and integral (115-117); Fourier expansion of the EM potential in Coulomb gauge and equation of motion for the spatial Fourier components (118-119); the general solution of Maxwell’s equations in vacuum (120-121) [Tuesday, Mar. 1]

Review of wave equation results obtained.

For Maxwell’s equations in vacuum we can work in either the Coulomb gauge or the Lorentz gauge.

\paragraph{Coulomb gauge}

\begin{aligned}A^0 &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{A} &= 0 \\ \left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) \mathbf{A} &= 0\end{aligned} \hspace{\stretch{1}}(2.1)

\paragraph{Lorentz gauge}

\begin{aligned}\partial_i A^i &= 0 \\ \left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) A^i &= 0\end{aligned} \hspace{\stretch{1}}(2.4)

Note that \partial_i A^i = 0 is invariant under gauge transformations

\begin{aligned}A^i \rightarrow A^i + \partial^i \chi\end{aligned} \hspace{\stretch{1}}(2.6)

where

\begin{aligned}\partial_i \partial^i \chi = 0,\end{aligned} \hspace{\stretch{1}}(2.7)

So the Lorentz gauge condition does not pin down the potentials completely; if one uses the Lorentz gauge, this residual gauge freedom still has to be fixed.

However, in both cases we have

\begin{aligned}\left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) f = 0\end{aligned} \hspace{\stretch{1}}(2.8)

where

\begin{aligned}\frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \end{aligned} \hspace{\stretch{1}}(2.9)

is the wave operator.

Consider

\begin{aligned}\Delta = \frac{\partial^2 {{}}}{\partial {{x}}^2}\end{aligned} \hspace{\stretch{1}}(2.10)

where we are looking for a solution that is independent of y, z. Recall that the general solution for this equation has the form

\begin{aligned}f(t, x) = F_1 \left(t - \frac{x}{c}\right)+F_2 \left(t + \frac{x}{c}\right)\end{aligned} \hspace{\stretch{1}}(2.11)

PICTURE: superposition of two waves with F_1 moving along the x-axis in the positive direction, and F_2 in the negative x direction.

It is notable that the text derives 2.11 in a particularly slick way. It’s still black magic, since one has to know the solution to find it, but very very cool.

Review of Fourier methods.

It is often convenient to impose periodic boundary conditions

\begin{aligned}\mathbf{A}(\mathbf{x} + \mathbf{e}_i L) = \mathbf{A}(\mathbf{x}), i = 1,2,3\end{aligned} \hspace{\stretch{1}}(3.12)

In one dimension

\begin{aligned}f(x + L) = f(x)\end{aligned} \hspace{\stretch{1}}(3.13)

\begin{aligned}f(x) = \sum_{n=-\infty}^\infty e^{i \frac{2 \pi n}{L} x} \tilde{f}_n\end{aligned} \hspace{\stretch{1}}(3.14)

When f(x) is real we also have

\begin{aligned}f^{*}(x) = \sum_{n = -\infty}^\infty e^{-i \frac{2 \pi n}{L} x} (\tilde{f}_n)^{*}\end{aligned} \hspace{\stretch{1}}(3.15)

which implies

\begin{aligned}{\tilde{f}^{*}}_{n} = \tilde{f}_{-n}.\end{aligned} \hspace{\stretch{1}}(3.16)

We introduce a wave number

\begin{aligned}k_n = \frac{2 \pi n}{L},\end{aligned} \hspace{\stretch{1}}(3.17)

allowing a slightly simpler expression of the Fourier decomposition

\begin{aligned}f(x) = \sum_{n=-\infty}^\infty e^{i k_n x} \tilde{f}_{k_n}.\end{aligned} \hspace{\stretch{1}}(3.18)

The inverse transform is obtained by integration over some length L interval

\begin{aligned}\tilde{f}_{k_n} = \frac{1}{{L}} \int_{-L/2}^{L/2} dx e^{-i k_n x} f(x)\end{aligned} \hspace{\stretch{1}}(3.19)

\paragraph{Verify:}

We should be able to recover the Fourier coefficient by utilizing the above

\begin{aligned}\frac{1}{{L}} \int_{-L/2}^{L/2} dx e^{-i k_n x} \sum_{m=-\infty}^\infty e^{i k_m x} \tilde{f}_{k_m} &= \sum_{m = -\infty}^\infty \tilde{f}_{k_m} \delta_{mn} = \tilde{f}_{k_n},\end{aligned}

where we use the easily verifiable fact that

\begin{aligned}\frac{1}{{L}} \int_{-L/2}^{L/2} dx e^{i (k_m - k_n) x} = \begin{cases}0 & \quad \mbox{if } m \ne n \\ 1 & \quad \mbox{if } m = n.\end{cases}\end{aligned} \hspace{\stretch{1}}(3.20)

It is conventional to absorb a factor of L into the coefficients, writing \tilde{f}(k_n) = L \tilde{f}_{k_n}, for

\begin{aligned}f(x) &= \frac{1}{{L}} \sum_n \tilde{f}(k_n) e^{i k_n x} \\ \tilde{f}(k_n) &= \int_{-L/2}^{L/2} dx f(x) e^{-i k_n x}\end{aligned} \hspace{\stretch{1}}(3.21)

To take L \rightarrow \infty notice

\begin{aligned}k_n = \frac{2 \pi}{L} n\end{aligned} \hspace{\stretch{1}}(3.23)

when n changes by \Delta n = 1, k_n changes by \Delta k_n = \frac{2 \pi}{L} \Delta n

Using this

\begin{aligned}f(x) = \frac{1}{{2\pi}} \sum_n \left( \frac{2\pi}{L} \Delta n \right) \tilde{f}(k_n) e^{i k_n x}\end{aligned} \hspace{\stretch{1}}(3.24)

With L \rightarrow \infty, and \Delta k_n \rightarrow 0

\begin{aligned}f(x) &= \int_{-\infty}^\infty \frac{dk}{2\pi} \tilde{f}(k) e^{i k x} \\ \tilde{f}(k) &= \int_{-\infty}^\infty dx f(x) e^{-i k x}\end{aligned} \hspace{\stretch{1}}(3.25)

\paragraph{Verify:}

A loose verification of the inversion relationship (the most important bit) is possible by substitution

\begin{aligned}\int \frac{dk}{2\pi} e^{i k x} \tilde{f}(k) &= \iint \frac{dk}{2\pi} e^{i k x} dx' f(x') e^{-i k x'} \\ &= \int dx' f(x') \frac{1}{{2\pi}} \int dk e^{i k (x - x')}\end{aligned}

Now we employ the old physics ploy where we identify

\begin{aligned}\frac{1}{{2\pi}} \int dk e^{i k (x - x')} = \delta(x - x').\end{aligned} \hspace{\stretch{1}}(3.27)

With that we see that we recover the function f(x) above as desired.
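
This continuous transform pair is easy to spot check numerically against a case where the answer is known analytically; a minimal Python sketch (the grid choices are arbitrary assumptions):

import numpy as np

# Check the pair (3.25) on f(x) = exp(-x^2/2), where the answer is
# known analytically: f~(k) = sqrt(2 pi) exp(-k^2/2).
x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)

k = np.linspace(-5.0, 5.0, 11)
ft = np.array([np.sum(f * np.exp(-1j * kk * x)) * dx for kk in k])
print(np.max(np.abs(ft - np.sqrt(2 * np.pi) * np.exp(-k**2 / 2))))  # ~0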

In three dimensions

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= \int \frac{d^3 \mathbf{k}}{(2\pi)^3} \tilde{\mathbf{A}}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x}} \\ \tilde{\mathbf{A}}(\mathbf{k}, t) &= \int d^3 \mathbf{x} \mathbf{A}(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x}}\end{aligned} \hspace{\stretch{1}}(3.28)

Application to the wave equation

\begin{aligned}0 &= \left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) \mathbf{A}(\mathbf{x}, t) \\ &=\left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) \int \frac{d^3 \mathbf{k}}{(2\pi)^3} \tilde{\mathbf{A}}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=\int \frac{d^3 \mathbf{k}}{(2\pi)^3} \left( \frac{1}{{c^2}} \partial_{tt} \tilde{\mathbf{A}}(\mathbf{k}, t) + \mathbf{k}^2 \tilde{\mathbf{A}}(\mathbf{k}, t)\right)e^{i \mathbf{k} \cdot \mathbf{x}} \end{aligned}

Now operate with \int d^3 \mathbf{x} e^{-i \mathbf{p} \cdot \mathbf{x} }

\begin{aligned}0 &=\int d^3 \mathbf{x} e^{-i \mathbf{p} \cdot \mathbf{x} }\int \frac{d^3 \mathbf{k}}{(2\pi)^3} \left( \frac{1}{{c^2}} \partial_{tt} \tilde{\mathbf{A}}(\mathbf{k}, t) + \mathbf{k}^2 \tilde{\mathbf{A}}(\mathbf{k}, t)\right)e^{i \mathbf{k} \cdot \mathbf{x}}  \\ &=\int d^3 \mathbf{k}\delta^3(\mathbf{p} -\mathbf{k}) \left( \frac{1}{{c^2}} \partial_{tt} \tilde{\mathbf{A}}(\mathbf{k}, t) + \mathbf{k}^2 \tilde{\mathbf{A}}(\mathbf{k}, t)\right)\end{aligned}

Since this is true for all \mathbf{p} we have

\begin{aligned}\partial_{tt} \tilde{\mathbf{A}}(\mathbf{p}, t) = -c^2 \mathbf{p}^2 \tilde{\mathbf{A}}(\mathbf{p}, t) \end{aligned} \hspace{\stretch{1}}(3.30)

For every value of momentum we have a harmonic oscillator!

\begin{aligned}\ddot{x} = -\omega^2 x\end{aligned} \hspace{\stretch{1}}(3.31)

Fourier modes of EM potential in vacuum obey

\begin{aligned}\partial_{tt} \tilde{\mathbf{A}}(\mathbf{k}, t) = -c^2 \mathbf{k}^2 \tilde{\mathbf{A}}(\mathbf{k}, t)\end{aligned} \hspace{\stretch{1}}(3.32)

Because we are operating in the Coulomb gauge we must also have zero divergence. Let’s see how that constraint translates to our Fourier representation. Zero divergence means

\begin{aligned}0 &= \boldsymbol{\nabla} \cdot \mathbf{A}(\mathbf{x}, t) \\ &= \int \frac{d^3 \mathbf{k} }{(2 \pi)^3} \boldsymbol{\nabla} \cdot \left( e^{i \mathbf{k} \cdot \mathbf{x}} \tilde{\mathbf{A}}(\mathbf{k}, t) \right)\end{aligned}

The chain rule for the divergence in this case takes the form

\begin{aligned}\boldsymbol{\nabla} \cdot (\phi \mathbf{B}) = (\boldsymbol{\nabla} \phi) \cdot \mathbf{B} + \phi \boldsymbol{\nabla} \cdot \mathbf{B}.\end{aligned} \hspace{\stretch{1}}(3.33)

But since our vector function \tilde{\mathbf{A}} is not a function of spatial coordinates we have

\begin{aligned}0 = \int \frac{d^3 \mathbf{k} }{(2 \pi)^3} e^{i \mathbf{k} \cdot \mathbf{x}} (i \mathbf{k} \cdot \tilde{\mathbf{A}}(\mathbf{k}, t)).\end{aligned} \hspace{\stretch{1}}(3.34)

This has two immediate consequences. The first is that our momentum space potential is perpendicular to the wave number vector at all points in momentum space, and the second gives us a conjugate relation (substitute \mathbf{k} \rightarrow -\mathbf{k}' after taking conjugates for that one)

\begin{aligned}\mathbf{k} \cdot \tilde{\mathbf{A}}(\mathbf{k}, t) &= 0 \\ \tilde{\mathbf{A}}(-\mathbf{k}, t) &= \tilde{\mathbf{A}}^{*}(\mathbf{k}, t).\end{aligned} \hspace{\stretch{1}}(3.35)

The conjugate relation allows the potential to be written in the symmetrized form

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \int \frac{d^3 \mathbf{k}}{(2\pi)^3} e^{i \mathbf{k} \cdot \mathbf{x}} \left( \frac{1}{{2}} \tilde{\mathbf{A}}(\mathbf{k}, t) + \frac{1}{{2}} \tilde{\mathbf{A}}^{*}(- \mathbf{k}, t) \right)\end{aligned} \hspace{\stretch{1}}(3.37)

Since our system is essentially a harmonic oscillator at each point in momentum space

\begin{aligned}\partial_{tt} \tilde{\mathbf{A}}(\mathbf{k}, t) &= - \omega_k^2 \tilde{\mathbf{A}}(\mathbf{k}, t) \\ \omega_k^2 &= c^2 \mathbf{k}^2\end{aligned} \hspace{\stretch{1}}(3.38)

our general solution is of the form

\begin{aligned}\tilde{\mathbf{A}}(\mathbf{k}, t) &= e^{i \omega_k t} \mathbf{a}_{+}(\mathbf{k}) +e^{-i \omega_k t} \mathbf{a}_{-}(\mathbf{k}) \\ \tilde{\mathbf{A}}^{*}(\mathbf{k}, t) &= e^{-i \omega_k t} \mathbf{a}_{+}^{*}(\mathbf{k}) +e^{i \omega_k t} \mathbf{a}_{-}^{*}(\mathbf{k})\end{aligned} \hspace{\stretch{1}}(3.40)

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} e^{i \mathbf{k} \cdot \mathbf{x}} \frac{1}{{2}} \left( e^{i \omega_k t} (\mathbf{a}_{+}(\mathbf{k}) + \mathbf{a}_{-}^{*}(-\mathbf{k})) +e^{-i \omega_k t} (\mathbf{a}_{-}(\mathbf{k}) + \mathbf{a}_{+}^{*}(-\mathbf{k})) \right)\end{aligned} \hspace{\stretch{1}}(3.42)

Define

\begin{aligned}\boldsymbol{\beta}(\mathbf{k}) \equiv \frac{1}{{2}} (\mathbf{a}_{-}(\mathbf{k}) + \mathbf{a}_{+}^{*}(-\mathbf{k}) )\end{aligned} \hspace{\stretch{1}}(3.43)

so that

\begin{aligned}\boldsymbol{\beta}(-\mathbf{k}) = \frac{1}{{2}} (\mathbf{a}_{+}^{*}(\mathbf{k}) + \mathbf{a}_{-}(-\mathbf{k}))\end{aligned} \hspace{\stretch{1}}(3.44)

Our solution now takes the form

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \int \frac{d^3\mathbf{k}}{(2 \pi)^3} \left( e^{i (\mathbf{k} \cdot \mathbf{x} + \omega_k t)} \boldsymbol{\beta}^{*}(-\mathbf{k})+e^{i (\mathbf{k} \cdot \mathbf{x} - \omega_k t)} \boldsymbol{\beta}(\mathbf{k})\right)\end{aligned} \hspace{\stretch{1}}(3.45)

\paragraph{Claim:}

This is now manifestly real. To see this, consider the first term with \mathbf{k} = -\mathbf{k}', noting that \int_{-\infty}^\infty dk = \int_{\infty}^{-\infty} (-dk') = \int_{-\infty}^\infty dk'

\begin{aligned}\int \frac{d^3\mathbf{k}'}{(2 \pi)^3} e^{i (-\mathbf{k}' \cdot \mathbf{x} + \omega_k t)} \boldsymbol{\beta}^{*}(\mathbf{k}')\end{aligned} \hspace{\stretch{1}}(3.46)

Dropping primes this is the conjugate of the second term.
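
A one dimensional numerical analogue of this reality claim is easy to test; the grid and the random \boldsymbol{\beta}(k) below are illustrative assumptions of mine:

import numpy as np

# Build a 1D analogue of (3.45) from random beta(k) on a symmetric k
# grid and check that the combination is real.
rng = np.random.default_rng(0)
c = 1.0
k = np.linspace(-10.0, 10.0, 401)
dk = k[1] - k[0]
omega = c * np.abs(k)
beta = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)

x, t = 0.7, 1.3
beta_conj_neg = np.conj(beta[::-1])  # beta*(-k) on this symmetric grid
integrand = (np.exp(1j * (k * x + omega * t)) * beta_conj_neg
             + np.exp(1j * (k * x - omega * t)) * beta)
f = np.sum(integrand) * dk / (2 * np.pi)
print(abs(f.imag) / abs(f))  # negligible relative imaginary part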

\paragraph{Claim:}

We have \mathbf{k} \cdot \boldsymbol{\beta}(\mathbf{k})  = 0.

Since we have \mathbf{k} \cdot \tilde{\mathbf{A}}(\mathbf{k}, t) = 0, 3.40 implies that \mathbf{k} \cdot \mathbf{a}_{\pm}(\mathbf{k}) = 0. With each of these vector integration constants perpendicular to \mathbf{k} at each point in momentum space, so is their linear combination \boldsymbol{\beta}(\mathbf{k}).

References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.


My submission for PHY356 (Quantum Mechanics I) Problem Set 3.

Posted by peeterjoot on November 30, 2010

[Click here for a PDF of this post with nicer formatting]

Problem 1.

Statement

A particle of mass m is free to move along the x-direction such that V(X)=0. The state of the system is represented by the wavefunction Eq. (4.74)

\begin{aligned}\psi(x,t) = \frac{1}{{\sqrt{2\pi}}} \int_{-\infty}^\infty dk e^{i k x} e^{- i \omega t} f(k)\end{aligned} \hspace{\stretch{1}}(1.1)

with f(k) given by Eq. (4.59).

\begin{aligned}f(k) &= N e^{-\alpha k^2}\end{aligned} \hspace{\stretch{1}}(1.2)

Note that I’ve inserted a 1/\sqrt{2\pi} factor above that isn’t in the text, because otherwise \psi(x,t) will not be unit normalized (assuming f(k) is normalized in wavenumber space).

\begin{itemize}
\item
(a) What is the group velocity associated with this state?
\item
(b) What is the probability for measuring the particle at position x=x_0>0 at time t=t_0>0?
\item
(c) What is the probability per unit length for measuring the particle at position x=x_0>0 at time t=t_0>0?
\item
(d) Explain the physical meaning of the above results.
\end{itemize}

Solution

(a). group velocity.

To calculate the group velocity we need to know the dependence of \omega on k.

Let’s step back and consider the time evolution action on \psi(x,0). For the free particle case we have

\begin{aligned}H = \frac{\mathbf{p}^2}{2m} = -\frac{\hbar^2}{2m} \partial_{xx}.\end{aligned} \hspace{\stretch{1}}(1.3)

Writing N' = N/\sqrt{2\pi} we have

\begin{aligned}-\frac{i t}{\hbar} H \psi(x,0) &= \frac{i t \hbar }{2m} N' \int_{-\infty}^\infty dk (i k)^2 e^{i k x - \alpha k^2} \\ &= N' \int_{-\infty}^\infty dk \frac{-i t \hbar k^2}{2m} e^{i k x - \alpha k^2}\end{aligned}

Each successive application of -iHt/\hbar will introduce another power of -it\hbar k^2/2 m, so once we sum all the terms of the exponential series U(t) = e^{-iHt/\hbar} we have

\begin{aligned}\psi(x,t) =N' \int_{-\infty}^\infty dk \exp\left( \frac{-i t \hbar k^2}{2m} + i k x - \alpha k^2 \right).\end{aligned} \hspace{\stretch{1}}(1.4)

Comparing with 1.1 we find

\begin{aligned}\omega(k) = \frac{\hbar k^2}{2m}.\end{aligned} \hspace{\stretch{1}}(1.5)

This completes this section of the problem since we are now able to calculate the group velocity

\begin{aligned}v_g = \frac{\partial {\omega(k)}}{\partial {k}} = \frac{\hbar k}{m}.\end{aligned} \hspace{\stretch{1}}(1.6)

(b). What is the probability for measuring the particle at position x=x_0>0 at time t=t_0>0?

In order to evaluate the probability, it looks desirable to evaluate the wave function integral 1.4.
Writing 2 \beta = i/(\alpha + i t \hbar/2m ), the exponent of that integral is

\begin{aligned}-k^2 \left( \alpha + \frac{i t \hbar }{2m} \right) + i k x&=-\left( \alpha + \frac{i t \hbar }{2m} \right) \left( k^2 - \frac{i k x }{\alpha + \frac{i t \hbar }{2m} } \right) \\ &=-\frac{i}{2\beta} \left( (k - x \beta )^2 - x^2 \beta^2 \right)\end{aligned}

The x^2 portion of the exponential

\begin{aligned}\frac{i x^2 \beta^2}{2\beta} = \frac{i x^2 \beta}{2} = - \frac{x^2 }{4 (\alpha + i t \hbar /2m)}\end{aligned}

then comes out of the integral. We can also make a change of variables q = k - x \beta to evaluate the remainder of the Gaussian and are left with

\begin{aligned}\psi(x,t) =N' \sqrt{ \frac{\pi}{\alpha + i t \hbar/2m} } \exp\left( - \frac{x^2 }{4 (\alpha + i t \hbar /2m)} \right).\end{aligned} \hspace{\stretch{1}}(1.7)

Observe that from 1.2 we can compute N = (2 \alpha/\pi)^{1/4}, which could be substituted back into 1.7 if desired.

Our probability density is

\begin{aligned}{\left\lvert{ \psi(x,t) }\right\rvert}^2 &=\frac{1}{{2 \pi}} N^2 {\left\lvert{ \frac{\pi}{\alpha + i t \hbar/2m} }\right\rvert} \exp\left( - \frac{x^2}{4} \left( \frac{1}{{(\alpha + i t \hbar /2m)}} + \frac{1}{{(\alpha - i t \hbar /2m)}} \right) \right) \\ &=\frac{1}{{2 \pi}} \sqrt{\frac{2 \alpha}{\pi} } \frac{\pi}{\sqrt{\alpha^2 + (t \hbar/2m)^2 }} \exp\left( - \frac{x^2}{4} \frac{1}{{\alpha^2 + (t \hbar/2m)^2 }} \left( \alpha - i t \hbar /2m + \alpha + i t \hbar /2m \right)\right)\end{aligned}

With a final regrouping of terms, this is

\begin{aligned}{\left\lvert{ \psi(x,t) }\right\rvert}^2 =\sqrt{\frac{ \alpha }{ 2 \pi \left( \alpha^2 + (t \hbar/2m)^2 \right) }}\exp\left( - \frac{x^2}{2} \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 } \right).\end{aligned} \hspace{\stretch{1}}(1.8)

As a sanity check we observe that this integrates to unity for all t, as desired. The probability that we find the particle in the range x > x_0 is then

\begin{aligned}P_{x>x_0}(t) = \sqrt{\frac{ \alpha }{ 2 \pi \left( \alpha^2 + (t \hbar/2m)^2 \right) }}\int_{x=x_0}^\infty dx \exp\left( - \frac{x^2}{2} \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 } \right)\end{aligned} \hspace{\stretch{1}}(1.9)

The only simplification we can make is to rewrite this in terms of the complementary error function

\begin{aligned}\text{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2} dt.\end{aligned} \hspace{\stretch{1}}(1.10)

Writing

\begin{aligned}\beta(t) = \frac{\alpha}{\alpha^2 + (t \hbar/2m)^2 },\end{aligned} \hspace{\stretch{1}}(1.11)

we have

\begin{aligned}P_{x>x_0}(t_0) = \frac{1}{{2}} \text{erfc} \left( \sqrt{\beta(t_0)/2} x_0 \right)\end{aligned} \hspace{\stretch{1}}(1.12)

Sanity checking this result, we note that since \text{erfc}(0) = 1 the probability for finding the particle in the x>0 range is 1/2 as expected.
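
For some assumed parameter values (\hbar = m = \alpha = 1, x_0 = 2, all my own illustrative choices), a few lines of Python show this probability rising toward the limiting value 1/2 as the packet spreads past the fixed threshold:

import numpy as np
from scipy.special import erfc

hbar, m, alpha, x0 = 1.0, 1.0, 1.0, 2.0

def beta(t):
    # The time dependent width parameter of 1.11.
    return alpha / (alpha**2 + (t * hbar / (2 * m))**2)

for t0 in (0.0, 1.0, 5.0, 25.0):
    # P_{x > x0}(t0) from 1.12.
    print(t0, 0.5 * erfc(np.sqrt(beta(t0) / 2) * x0))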

(c). What is the probability per unit length for measuring the particle at position x=x_0>0 at time t=t_0>0?

This unit length probability is thus

\begin{aligned}P_{x>x_0+1/2}(t_0) - P_{x>x_0-1/2}(t_0) &=\frac{1}{{2}} \text{erfc}\left( \sqrt{\frac{\beta(t_0)}{2}} \left(x_0+\frac{1}{{2}} \right) \right) -\frac{1}{{2}} \text{erfc}\left( \sqrt{\frac{\beta(t_0)}{2}} \left(x_0-\frac{1}{{2}} \right) \right) \end{aligned} \hspace{\stretch{1}}(1.13)

(d). Explain the physical meaning of the above results.

To get an idea what the group velocity means, observe that we can write our wavefunction 1.1 as

\begin{aligned}\psi(x,t) = \frac{1}{{\sqrt{2\pi}}} \int_{-\infty}^\infty dk e^{i k (x - v_g t)} f(k)\end{aligned} \hspace{\stretch{1}}(1.14)

We see that the phase coefficient of the Gaussian f(k) “moves” at the rate of the group velocity v_g. Also recall that in the text it is noted that the time dependent term 1.11 can be expressed in terms of position and momentum uncertainties (\Delta x)^2, and (\Delta p)^2 = \hbar^2 (\Delta k)^2. That is

\begin{aligned}\frac{1}{{\beta(t)}} = (\Delta x)^2 + \frac{(\Delta p)^2}{m^2} t^2 \equiv (\Delta x(t))^2\end{aligned} \hspace{\stretch{1}}(1.15)

This makes it evident that the probability density flattens and spreads over time at a rate equal to the uncertainty of the group velocity \Delta p/m = \Delta v_g (since v_g = \hbar k/m). It is interesting that something as simple as this phase change results in a physically measurable phenomenon. We see that, as a direct result of this linear-in-time phase change, we are less able to find the particle localized around its original x = 0 position as more time elapses.

Problem 2.

Statement

A particle with intrinsic angular momentum or spin s=1/2 is prepared in the spin-up with respect to the z-direction state {\lvert {f} \rangle}={\lvert {z+} \rangle}. Determine

\begin{aligned}\left({\langle {f} \rvert} \left( S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2}\end{aligned} \hspace{\stretch{1}}(2.16)

and

\begin{aligned}\left({\langle {f} \rvert} \left( S_x - {\langle {f} \rvert} S_x {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2}\end{aligned} \hspace{\stretch{1}}(2.17)

and explain what these relations say about the system.

Solution: Uncertainty of S_z with respect to {\lvert {z+} \rangle}

Noting that S_z {\lvert {f} \rangle} = S_z {\lvert {z+} \rangle} = \hbar/2 {\lvert {z+} \rangle} we have

\begin{aligned}{\langle {f} \rvert} S_z {\lvert {f} \rangle} = \frac{\hbar}{2} \end{aligned} \hspace{\stretch{1}}(2.18)

The average outcome for many measurements of the physical quantity associated with the operator S_z when the system has been prepared in the state {\lvert {f} \rangle} = {\lvert {z+} \rangle} is \hbar/2.

\begin{aligned}\Bigl(S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \Bigr) {\lvert {f} \rangle}&= \frac{\hbar}{2} {\lvert {f} \rangle} -\frac{\hbar}{2} {\lvert {f} \rangle} = 0\end{aligned} \hspace{\stretch{1}}(2.19)

We could also compute this from the matrix representations, but it is slightly more work.

Operating once more with S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} on the zero ket vector still gives us zero, so we have zero in the root for 2.16

\begin{aligned}\left({\langle {f} \rvert} \left( S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2} = 0\end{aligned} \hspace{\stretch{1}}(2.20)

What does 2.20 say about the state of the system? Given many measurements of the physical quantity associated with the operator V = (S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1})^2, where the initial state of the system is always {\lvert {f} \rangle} = {\lvert {z+} \rangle}, then the average of the measurements of the physical quantity associated with V is zero. We can think of the operator V^{1/2} = S_z - {\langle {f} \rvert} S_z {\lvert {f} \rangle} \mathbf{1} as a representation of the observable, “how different is the measured result from the average {\langle {f} \rvert} S_z {\lvert {f} \rangle}”.

So, given a system prepared in state {\lvert {f} \rangle} = {\lvert {z+} \rangle}, and performance of repeated measurements capable of only examining spin-up, we find that the system is never any different than its initial spin-up state. We have no uncertainty that we will measure any difference from spin-up on average, when the system is prepared in the spin-up state.

Solution: Uncertainty of S_x with respect to {\lvert {z+} \rangle}

For this second part of the problem, we note that we can write

\begin{aligned}{\lvert {f} \rangle} = {\lvert {z+} \rangle} = \frac{1}{{\sqrt{2}}} ( {\lvert {x+} \rangle} + {\lvert {x-} \rangle} ).\end{aligned} \hspace{\stretch{1}}(2.21)

So the expectation value of S_x with respect to this state is

\begin{aligned}{\langle {f} \rvert} S_x {\lvert {f} \rangle}&=\frac{1}{{2}}( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) S_x ( {\lvert {x+} \rangle} + {\lvert {x-} \rangle} ) \\ &=\frac{\hbar}{4} ( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) ( {\lvert {x+} \rangle} - {\lvert {x-} \rangle} ) \\ &=\frac{\hbar}{4} ( 1 - 0 + 0 - 1 ) \\ &= 0\end{aligned}

After repeated preparation of the system in state {\lvert {f} \rangle}, the average measurement of the physical quantity associated with operator S_x is zero. In terms of the eigenstates for that operator {\lvert {x+} \rangle} and {\lvert {x-} \rangle} we have equal probability of measuring either given this particular initial system state.

For the variance calculation, this reduces our problem to the calculation of {\langle {f} \rvert} S_x^2 {\lvert {f} \rangle}, which is

\begin{aligned}{\langle {f} \rvert} S_x^2 {\lvert {f} \rangle} &=\frac{1}{{2}} \left( \frac{\hbar}{2} \right)^2 ( {\langle {x+} \rvert} + {\langle {x-} \rvert} ) ( (+1)^2 {\lvert {x+} \rangle} + (-1)^2 {\lvert {x-} \rangle} ) \\ &=\left( \frac{\hbar}{2} \right)^2,\end{aligned}

so for 2.17 we have

\begin{aligned}\left({\langle {f} \rvert} \left( S_x - {\langle {f} \rvert} S_x {\lvert {f} \rangle} \mathbf{1} \right)^2 {\lvert {f} \rangle} \right)^{1/2} = \frac{\hbar}{2}\end{aligned} \hspace{\stretch{1}}(2.22)

The root mean square deviation of the physical quantity associated with the operator S_x is found to be \hbar/2 when repeated measurements are performed on a system initially prepared in state {\lvert {f} \rangle} = {\lvert {z+} \rangle}. We saw that the average value for the measurement of that physical quantity itself was zero, showing that we have equal probabilities of measuring either of \pm \hbar/2 in this experiment. A measurement that would show whether the system was in the x-direction spin-up or spin-down state would find those states equiprobable.
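
Both uncertainties are also quick to verify with explicit 2x2 matrix representations, as in this sketch (with \hbar = 1):

import numpy as np

Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
f = np.array([1, 0], dtype=complex)  # the |z+> state

def uncertainty(S, state):
    # sqrt(<f| (S - <f|S|f> 1)^2 |f>)
    avg = (state.conj() @ S @ state).real
    dev = S - avg * np.eye(2)
    return np.sqrt((state.conj() @ dev @ dev @ state).real)

print(uncertainty(Sz, f))  # 0
print(uncertainty(Sx, f))  # 0.5, i.e. hbar/2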

Grading comments.

I lost one mark on the group velocity response. Instead of 1.6 he wanted

\begin{aligned}v_g = {\left. \frac{\partial {\omega(k)}}{\partial {k}} \right\vert}_{k = k_0}= \frac{\hbar k_0}{m} = 0\end{aligned} \hspace{\stretch{1}}(3.23)

since f(k) peaks at k=0.

I’ll have to go back and think about that a bit, because I’m unsure of the last bits of the reasoning there.

I also lost 0.5 and 0.25 (twice) because I didn’t explicitly state that the probability that the particle is at x_0, a specific single point, is zero. I thought that was obvious and didn’t have to be stated, but it appears expressing this explicitly is what he was looking for.

Curiously, one thing that I didn’t lose marks on was the wrong answer for the probability per unit length. What he was actually asking for was the following

\begin{aligned}\lim_{\epsilon \rightarrow 0} \frac{1}{{\epsilon}} \int_{x_0 - \epsilon/2}^{x_0 + \epsilon/2} {\left\lvert{ \Psi(x_0, t_0) }\right\rvert}^2 dx = {\left\lvert{\Psi(x_0, t_0)}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(3.24)

That’s a much more sensible quantity to calculate than what I did, but I don’t think that I can be faulted too much, since the phrase was never used in the text nor in the lectures.


Oct 12, PHY356F lecture notes.

Posted by peeterjoot on October 12, 2010

Today as an experiment I tried taking live notes in latex in class (previously I’d not taken any notes since the lectures followed the text so closely).

[Click here for a PDF of this post with nicer formatting]

Oct 12.

Review. What have we learned?

Chapter 1.

Information about systems comes from vectors and operators. Express the vector {\lvert {\phi} \rangle} describing the system in terms of the eigenvectors {\lvert {a_n} \rangle}, n \in 1,2,3,\cdots, of some operator A

\begin{aligned}{\lvert {\phi} \rangle} = \sum_n c_n {\lvert {a_n} \rangle}.\end{aligned}

What are the coefficients c_n? Act on both sides by {\langle {a_m} \rvert} to find

\begin{aligned}\left\langle{{a_m}} \vert {{\phi}}\right\rangle &= \sum_n c_n \underbrace{\left\langle{{a_m}} \vert {{a_n}}\right\rangle}_{\text{Kronecker delta}}  \\ &= \sum_n c_n \delta_{mn} \\ &= c_m\end{aligned}

\begin{aligned}c_m = \left\langle{{a_m}} \vert {{\phi}}\right\rangle\end{aligned}

Analogy

\begin{aligned}\mathbf{v} = \sum_i v_i \mathbf{e}_i \end{aligned}

\begin{aligned}\mathbf{e}_1 \cdot \mathbf{v} = \sum_i v_i \mathbf{e}_1 \cdot \mathbf{e}_i = v_1\end{aligned}

Physical information comes from the probability for obtaining a measurement of the physical entity associated with operator A. The probability of obtaining outcome a_m, an eigenvalue of A, is {\left\lvert{c_m}\right\rvert}^2

Chapter 2.

Deal with operators that have continuous eigenvalues and eigenvectors.

We now express

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

Here the coefficients f(k) are analogous to c_n.

Now if we project onto k'

\begin{aligned}\left\langle{{k'}} \vert {{\phi}}\right\rangle &= \int dk f(k) \underbrace{\left\langle{{k'}} \vert {{k}}\right\rangle}_{\text{Dirac delta}} \\ &= \int dk f(k) \delta(k' -k) \\ &= f(k') \end{aligned}

Unlike the discrete case, this is not a probability. Probability density for obtaining outcome k' is {\left\lvert{f(k')}\right\rvert}^2.

Example 2.

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

Now if we project x onto both sides

\begin{aligned}\left\langle{{x}} \vert {{\phi}}\right\rangle &= \int dk f(k) \left\langle{{x}} \vert {{k}}\right\rangle \\ \end{aligned}

With \left\langle{{x}} \vert {{k}}\right\rangle = u_k(x)

\begin{aligned}\phi(x) &\equiv \left\langle{{x}} \vert {{\phi}}\right\rangle \\ &= \int dk f(k) u_k(x)  \\ &= \int dk f(k) \frac{1}{{\sqrt{L}}} e^{ikx}\end{aligned}

This is with periodic boundary conditions for the normalization. The infinite interval (Dirac delta) normalization is also possible.

\begin{aligned}\phi(x) &= \frac{1}{{\sqrt{L}}} \int dk f(k) e^{ikx}\end{aligned}

Multiply both sides by e^{-ik'x}/\sqrt{L} and integrate. This is analogous to multiplying {\lvert {\phi} \rangle} = \int f(k) {\lvert {k} \rangle} dk by {\langle {k'} \rvert}. We get

\begin{aligned}\int \phi(x) \frac{1}{{\sqrt{L}}} e^{-ik'x} dx&= \frac{1}{{L}} \iint dk f(k) e^{i(k-k')x} dx \\ &= \int dk f(k) \Bigl( \frac{1}{{L}} \int e^{i(k-k')x} dx \Bigr) \\ &= \int dk f(k) \delta(k-k') \\ &= f(k')\end{aligned}

\begin{aligned}f(k') &=\int \phi(x) \frac{1}{{\sqrt{L}}} e^{-ik'x} dx\end{aligned}

We can talk about the state vector in terms of its position basis \phi(x), or in momentum space via Fourier transformation. These are equivalent, just expressed differently. The question of interpretation in terms of probabilities works out the same; either way we look at the probability density.

The quantity

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

is also called a wave packet state, since it involves a superposition of many states {\lvert {k} \rangle}. Example: See Fig 4.1 (Gaussian wave packet, with {\left\lvert{\phi}\right\rvert}^2 as the height). This wave packet is a snapshot of the wave function amplitude at one specific time instant. The evolution of this wave packet is governed by the Hamiltonian, which brings us to chapter 3.

Chapter 3.

For

\begin{aligned}{\lvert {\phi} \rangle} = \int dk f(k) {\lvert {k} \rangle}\end{aligned}

How do we find {\lvert {\phi(t)} \rangle}, the time evolved state? Here we have the option of choosing which of the pictures (Schrödinger, Heisenberg, interaction) we deal with. Since the Heisenberg picture deals with time evolved operators, and the interaction picture with evolving Hamiltonians, neither of these is required to answer this question. Consider the Schrödinger picture, which gives

\begin{aligned}{\lvert {\phi(t)} \rangle} = \int dk f(k) {\lvert {k} \rangle} e^{-i E_k t/\hbar}\end{aligned}

where E_k is the eigenvalue of the Hamiltonian operator H.

STRONG SEEMING HINT: If looking for additional problems and homework, consider in detail the time evolution of the Gaussian wave packet state.

Chapter 4.

For three dimensions with V(x,y,z) = 0

\begin{aligned}H &= \frac{1}{{2m}} \mathbf{p}^2 \\ \mathbf{p} &= \sum_i p_i \mathbf{e}_i \\ \end{aligned}

In the position representation, where

\begin{aligned}p_i &= -i \hbar \frac{d}{dx_i}\end{aligned}

the Schrödinger equation is

\begin{aligned}H u(x,y,z) &= E u(x,y,z) \\ H &= -\frac{\hbar^2}{2m} \boldsymbol{\nabla}^2 \\ &= -\frac{\hbar^2}{2m} \left( \frac{\partial^2}{\partial {x}^2}+\frac{\partial^2}{\partial {y}^2}+\frac{\partial^2}{\partial {z}^2}\right) \end{aligned}

Separation of variables assumes it is possible to let

\begin{aligned}u(x,y,z) = X(x) Y(y) Z(z)\end{aligned}

(these capital letters are functions, not operators).

\begin{aligned}-\frac{\hbar^2}{2m} \left( YZ \frac{\partial^2 X}{\partial {x}^2}+ XZ \frac{\partial^2 Y}{\partial {y}^2}+ XY \frac{\partial^2 Z}{\partial {z}^2}\right)&= E X Y Z\end{aligned}

Dividing as usual by XYZ we have

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{1}{{X}} \frac{\partial^2 X}{\partial {x}^2}+ \frac{1}{{Y}} \frac{\partial^2 Y}{\partial {y}^2}+ \frac{1}{{Z}} \frac{\partial^2 Z}{\partial {z}^2} \right)&= E \end{aligned}

The curious thing is that we have these three derivative terms, which sum to an energy E that is independent of x, y, z, so it must be that each of the terms is separately constant. We can therefore separate these into three individual equations

\begin{aligned}-\frac{\hbar^2}{2m} \frac{1}{{X}} \frac{\partial^2 X}{\partial {x}^2} &= E_1 \\ -\frac{\hbar^2}{2m} \frac{1}{{Y}} \frac{\partial^2 Y}{\partial {y}^2} &= E_2 \\ -\frac{\hbar^2}{2m} \frac{1}{{Z}} \frac{\partial^2 Z}{\partial {z}^2} &= E_3\end{aligned}

or

\begin{aligned}\frac{\partial^2 X}{\partial {x}^2} &= \left( - \frac{2m E_1}{\hbar^2} \right) X  \\ \frac{\partial^2 Y}{\partial {y}^2} &= \left( - \frac{2m E_2}{\hbar^2} \right) Y  \\ \frac{\partial^2 Z}{\partial {z}^2} &= \left( - \frac{2m E_3}{\hbar^2} \right) Z \end{aligned}

We have then

\begin{aligned}X(x) = C_1 e^{i k_1 x},\end{aligned}

and similarly for Y and Z, with

\begin{aligned}E_1 &= \frac{\hbar^2 k_1^2 }{2m} = \frac{p_1^2}{2m} \\ E_2 &= \frac{\hbar^2 k_2^2 }{2m} = \frac{p_2^2}{2m} \\ E_3 &= \frac{\hbar^2 k_3^2 }{2m} = \frac{p_3^2}{2m} \end{aligned}

We are free to use any sort of normalization procedure we wish (periodic boundary conditions, infinite interval Dirac delta normalization, …)

Angular momentum.

HOMEWORK: go through the steps to understand how to formulate \boldsymbol{\nabla}^2 in spherical polar coordinates. This is a lot of work, but is good practice and background for dealing with the Hydrogen atom, something with spherical symmetry that is most naturally analyzed in the spherical polar coordinates.

In spherical coordinates (We won’t go through this here, but it is good practice) with

\begin{aligned}x &= r \sin\theta \cos\phi \\ y &= r \sin\theta \sin\phi \\ z &= r \cos\theta\end{aligned}

we have with u = u(r,\theta, \phi)

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{1}{{r}} \partial_{rr} (r u) +  \frac{1}{{r^2 \sin\theta}} \partial_\theta (\sin\theta \partial_\theta u) + \frac{1}{{r^2 \sin^2\theta}} \partial_{\phi\phi} u \right)&= E u\end{aligned}

We see the start of a separation of variables attack with u = R(r) Y(\theta, \phi). We end up with

\begin{aligned}-\frac{\hbar^2}{2m} \left( \frac{r}{R} (r R)'' +  \frac{1}{{Y \sin\theta}} \partial_\theta (\sin\theta \partial_\theta Y) + \frac{1}{{Y \sin^2\theta}} \partial_{\phi\phi} Y \right) = E r^2,\end{aligned}

which separates into

\begin{aligned}r (r R)'' + \left( \frac{2m E}{\hbar^2} r^2 - \lambda \right) R &= 0\end{aligned}

\begin{aligned}\frac{1}{{Y \sin\theta}} \partial_\theta (\sin\theta \partial_\theta Y) + \frac{1}{{Y \sin^2\theta}} \partial_{\phi\phi} Y &= -\lambda\end{aligned}

Application of separation of variables again, with Y = P(\theta) Q(\phi) gives us

\begin{aligned}\frac{1}{{P \sin\theta}} \partial_\theta (\sin\theta \partial_\theta P) + \frac{1}{{Q \sin^2\theta}} \partial_{\phi\phi} Q &= -\lambda \end{aligned}

\begin{aligned}\frac{\sin\theta}{P } \partial_\theta (\sin\theta \partial_\theta P) +\lambda  \sin^2\theta+ \frac{1}{{Q }} \partial_{\phi\phi} Q &= 0\end{aligned}

\begin{aligned}\frac{\sin\theta}{P } \partial_\theta (\sin\theta \partial_\theta P) + \lambda \sin^2\theta - \mu &= 0 \\ \frac{1}{{Q }} \partial_{\phi\phi} Q &= -\mu\end{aligned}

or

\begin{aligned}\frac{1}{P \sin\theta} \partial_\theta (\sin\theta \partial_\theta P) +\lambda -\frac{\mu}{\sin^2\theta} &= 0\end{aligned} \hspace{\stretch{1}}(1.1)

\begin{aligned}\partial_{\phi\phi} Q &= -\mu Q\end{aligned} \hspace{\stretch{1}}(1.2)

The equation for P can be solved using the Legendre function P_l^m(\cos\theta) where \lambda = l(l+1) and l is an integer

Replacing \mu with m^2, where m is an integer

\begin{aligned}\frac{d^2 Q}{d\phi^2} &= -m^2 Q\end{aligned}

Imposing a periodic boundary condition Q(\phi) = Q(\phi + 2\pi), where (m = 0, \pm 1, \pm 2, \cdots) we have

\begin{aligned}Q &= \frac{1}{{\sqrt{2\pi}}} e^{im\phi}\end{aligned}

There is then the overall solution u(r,\theta,\phi) = R(r) Y(\theta, \phi) for a free particle. The functions Y(\theta, \phi) are

\begin{aligned}Y_{lm}(\theta, \phi) &= N \left( \frac{1}{{\sqrt{2\pi}}} e^{im\phi} \right) \underbrace{ P_l^m(\cos\theta) }_{ -l \le m \le l }\end{aligned}

where N is a normalization constant, and m = 0, \pm 1, \pm 2, \cdots. Y_{lm} is an eigenstate of both the \mathbf{L}^2 operator and L_z (two for the price of one). There’s no specific reason for the direction z; it is just the direction picked by convention.

Angular momentum is given by

\begin{aligned}\mathbf{L} = \mathbf{r} \times \mathbf{p}\end{aligned}

where

\begin{aligned}\mathbf{r} = x \hat{\mathbf{x}} + y\hat{\mathbf{y}} + z\hat{\mathbf{z}}\end{aligned}

and

\begin{aligned}\mathbf{p} = p_x \hat{\mathbf{x}} + p_y\hat{\mathbf{y}} + p_z\hat{\mathbf{z}}\end{aligned}

The important thing to remember is that the aim of following all the math is to show that

\begin{aligned}\mathbf{L}^2 Y_{lm} = \hbar^2 l (l+1) Y_{lm}\end{aligned}

and simultaneously

\begin{aligned}L_z Y_{lm} = \hbar m Y_{lm}\end{aligned}

Part of the solution involves working with \left[{L_z},{L_{+}}\right], and \left[{L_z},{L_{-}}\right], where

\begin{aligned}L_{+} &= L_x + i L_y \\ L_{-} &= L_x - i L_y\end{aligned}

An exercise (not in the book) is to evaluate

\begin{aligned}\left[{L_z},{L_{+}}\right] &= L_z L_x + i L_z L_y - L_x L_z - i L_y L_z \end{aligned} \hspace{\stretch{1}}(1.3)

where

\begin{aligned}\left[{L_x},{L_y}\right]  &= i \hbar L_z \\ \left[{L_y},{L_z}\right]  &= i \hbar L_x \\ \left[{L_z},{L_x}\right]  &= i \hbar L_y\end{aligned} \hspace{\stretch{1}}(1.4)

Substitution back in 1.3 we have

\begin{aligned}\left[{L_z},{L_{+}}\right] &=\left[{L_z},{L_x}\right] + i \left[{L_z},{L_y}\right]  \\ &=i \hbar ( L_y - i L_x ) \\ &=\hbar ( i L_y +  L_x ) \\ &=\hbar L_{+}\end{aligned}
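
This commutator is easy to verify numerically in any fixed matrix representation. For example, with the standard l = 1 matrices and \hbar = 1 (a sketch of mine, not from the lecture):

import numpy as np

s = 1 / np.sqrt(2)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

Lp = Lx + 1j * Ly
print(np.allclose(Lz @ Lp - Lp @ Lz, Lp))  # [Lz, L+] = L+ (hbar = 1): True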


Fourier transformation of the Pauli QED wave equation (Take I).

Posted by peeterjoot on May 29, 2010

[Click here for a PDF of this post with nicer formatting].

Motivation.

In [1], Feynman writes the Pauli wave equation for a non-relativistic treatment of a mass in a scalar and vector potential electrodynamic field. That is

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)^2 \Psi + e \phi \Psi\end{aligned} \hspace{\stretch{1}}(1.1)

Is this amenable to Fourier transform solution like so many other PDEs? Let’s give it a try. It would also be interesting to attempt to apply such a computation to see if it is possible to calculate \left\langle{\mathbf{x}}\right\rangle, and the first two derivatives of this expectation value. I would guess that this would produce the Lorentz force equation.

Prep

Fourier Notation.

Our transform pair will be written

\begin{aligned} \Psi(\mathbf{x}, t) &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(2.2)

\begin{aligned}\hat{\Psi}(\mathbf{k}, t) &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \Psi(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{x} \end{aligned} \hspace{\stretch{1}}(2.3)

Interpretation of the squared momentum operator.

Feynman actually wrote

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left[\sigma \cdot \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)\right]\left[\sigma \cdot \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)\right] \Psi + e \phi \Psi\end{aligned} \hspace{\stretch{1}}(2.4)

That \sigma \cdot notation I’m not familiar with, and I’ve written this as a plain old vector square. If \mathbf{p} were not an operator, then this would be a scalar, but as written this actually also includes a bivector term proportional to \boldsymbol{\nabla} \wedge \mathbf{A} = I \mathbf{B}. To see that, lets expand this operator explicitly.

\begin{aligned}\left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right) \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right) \Psi &=\left( \mathbf{p}^2 - \frac{e}{c} ( \mathbf{p} \mathbf{A} + \mathbf{A} \mathbf{p} ) + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi \\ &=\left( - \hbar^2 \boldsymbol{\nabla}^2 + \frac{i e \hbar }{c} ( \boldsymbol{\nabla} \mathbf{A} + \mathbf{A} \boldsymbol{\nabla} ) + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi \\ \end{aligned}

This anticommutator of the vector potential and the gradient is only a scalar if \mathbf{A} has zero divergence. More generally, expanding by chain rules, and using braces to indicate the scope of the differential operations, we have

\begin{aligned}( \boldsymbol{\nabla} \mathbf{A} + \mathbf{A} \boldsymbol{\nabla} ) \Psi&=(\boldsymbol{\nabla} \Psi) \mathbf{A} + \mathbf{A} (\boldsymbol{\nabla} \Psi) + (\boldsymbol{\nabla} \mathbf{A}) \Psi \\ &=2 \mathbf{A} \cdot (\boldsymbol{\nabla} \Psi) + (\boldsymbol{\nabla} \cdot \mathbf{A}) \Psi + I (\boldsymbol{\nabla} \times \mathbf{A}) \Psi \\ &=2 \mathbf{A} \cdot (\boldsymbol{\nabla} \Psi) + (\boldsymbol{\nabla} \cdot \mathbf{A}) \Psi + I \mathbf{B} \Psi - I \mathbf{A} \times (\boldsymbol{\nabla} \Psi) \\ \end{aligned}

where I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 is the spatial unit trivector, and \mathbf{B} = \boldsymbol{\nabla} \times \mathbf{A}.

This is assuming \Psi should be treated as a complex valued scalar, and not a complex-like geometric object of any sort. Does this bivector term have physical meaning? Should it be discarded or retained? If we assume discarded, then we really want to write the Pauli equation utilizing an explicit scalar selection, as in

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left\langle{{ \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)^2 }}\right\rangle \Psi + e \phi \Psi.\end{aligned} \hspace{\stretch{1}}(2.5)

Assuming that to be the case, our squared momentum operator takes the form

\begin{aligned}\left\langle{{ \left( \mathbf{p} - \frac{e}{c} \mathbf{A} \right)^2 }}\right\rangle \Psi &=\left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 \frac{i e \hbar }{c}\mathbf{A} \cdot \boldsymbol{\nabla} + \frac{i e \hbar }{c}(\boldsymbol{\nabla} \cdot \mathbf{A}) + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi.\end{aligned} \hspace{\stretch{1}}(2.6)

The Pauli equation, written out explicitly in terms of the gradient is then

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} &= \frac{1}{{2m}} \left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 \frac{i e \hbar }{c}\mathbf{A} \cdot \boldsymbol{\nabla} + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi+e \left( \frac{i \hbar }{ 2 m c} (\boldsymbol{\nabla} \cdot \mathbf{A}) + \phi \right) \Psi.\end{aligned} \hspace{\stretch{1}}(2.7)

Confirmation.

Instead of guessing what Feynman means when he writes Pauli’s equation, it would be better to just check what Pauli says. In [2] he uses the more straightforward notation

\begin{aligned}\frac{1}{{2m}} \sum_{k=1}^3 \left( p_k - \frac{e}{c}A_k \right)^2\end{aligned} \hspace{\stretch{1}}(2.8)

for the vector potential dependent part of the Hamiltonian operator. This is just the scalar part as was guessed.

Guts

Using the expansion 2.7 of the Pauli equation, and writing V = \phi + i \hbar (\boldsymbol{\nabla} \cdot \mathbf{A})/ (2 m c) for the effective complex potential we have

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} = \frac{1}{{2m}} \left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 i \hbar \frac{e}{c} \mathbf{A} \cdot \boldsymbol{\nabla} + \frac{e^2}{c^2} \mathbf{A}^2 \right) \Psi + e V \Psi.\end{aligned} \hspace{\stretch{1}}(3.9)

Let’s now apply each of these derivative operations to our assumed Fourier solution \Psi(\mathbf{x}, t) from 2.2. Starting with the Laplacian we have

\begin{aligned}\boldsymbol{\nabla}^2 \Psi(\mathbf{x}, t) &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, t) (i\mathbf{k})^2 e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.10)

For the \mathbf{A} \cdot \boldsymbol{\nabla} operator application we have

\begin{aligned}\mathbf{A} \cdot \boldsymbol{\nabla} \Psi(\mathbf{x}, t) &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, t) (i \mathbf{A} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.11)

Putting both together we have

\begin{aligned}0 &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \left(-i \hbar \frac{\partial {\hat{\Psi}}}{\partial {t}} + \frac{1}{{2m}} \left( \hbar^2 \mathbf{k}^2 - 2 \hbar \frac{e}{c} \mathbf{A} \cdot \mathbf{k} + \frac{e^2}{c^2} \mathbf{A}^2 \right) \hat{\Psi} + e V \hat{\Psi} \right)e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.12)

We can tidy this up slightly by completing the square, yielding

\begin{aligned}0 &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \left(-i \hbar \frac{\partial {\hat{\Psi}}}{\partial {t}} + \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}, t) \right)^2 + e V(\mathbf{x}, t) \right) \hat{\Psi} \right)e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.13)

If this is to be zero for all (\mathbf{x}, t), it seems clear that we need \hat{\Psi}(\mathbf{k}, t) to be the solution of the first order non-linear PDE

\begin{aligned}\frac{\partial {\hat{\Psi}}}{\partial {t}}(\mathbf{k}, t) = \frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}, t) \right)^2 + e V(\mathbf{x}, t) \right) \hat{\Psi}(\mathbf{k}, t)\end{aligned} \hspace{\stretch{1}}(3.14)

Somewhere along the way this got a bit confused. Our Fourier transform function is somehow a function of not just wave number, but also of position, since \hat{\Psi} = \hat{\Psi}(\mathbf{x}, \mathbf{k}, t) by virtue of being a solution to a differential equation involving \mathbf{A}(\mathbf{x},t) and V(\mathbf{x}, t). Can we pretend not to have noticed this and continue on anyways? Let’s try the further simplification of the system by imposing a constraint of constant time potentials ({\partial {\mathbf{A}}}/{\partial {t}} = {\partial {V}}/{\partial {t}} = 0). That allows for direct integration of the wave function’s Fourier transform

\begin{aligned}\hat{\Psi}(\mathbf{k}, t) = \hat{\Psi}(\mathbf{k}, 0) \exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A} \right)^2 + e V \right) t\right).\end{aligned} \hspace{\stretch{1}}(3.15)

And inverse transforming this

\begin{aligned}\Psi(\mathbf{x}, t) &= \frac{1}{{(\sqrt{2 \pi})^3}} \int \hat{\Psi}(\mathbf{k}, 0) \exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) t+ i \mathbf{k} \cdot \mathbf{x}\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.16)

Inserting the Fourier integral for \hat{\Psi}(\mathbf{k}, 0) in terms of \Psi(\mathbf{x}', 0), we have the time evolution of the wave function as a convolution integral

\begin{aligned}\Psi(\mathbf{x}, t) &= \frac{1}{{(2 \pi)^3}} \int \Psi(\mathbf{x}', 0) \exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) t+ i \mathbf{k} \cdot (\mathbf{x} - \mathbf{x}')\right)d^3 \mathbf{k} d^3 \mathbf{x}'.\end{aligned} \hspace{\stretch{1}}(3.17)

Splitting out the convolution kernel, this takes a slightly tidier form

\begin{aligned}\Psi(\mathbf{x}, t) &= \int \hat{U}(\mathbf{x}, \mathbf{x}', t) \Psi(\mathbf{x}', 0) d^3 \mathbf{x}' \\ \hat{U}(\mathbf{x}, \mathbf{x}', t) &= \frac{1}{{(2 \pi)^3}} \int\exp\left(\frac{1}{{i \hbar}} \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) t+ i \mathbf{k} \cdot (\mathbf{x} - \mathbf{x}')\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(3.18)
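
For the free particle special case (\mathbf{A} = 0, V = 0) none of the problematic \mathbf{x} dependence discussed above shows up, and this Fourier evolution can be implemented directly with FFTs. Here is a minimal numerical sketch of that special case (my own, with \hbar = m = 1 units assumed, not part of the original notes):

import numpy as np

# Free particle special case (A = 0, V = 0) of the evolution above:
# psi_hat(k, t) = psi_hat(k, 0) exp(-i hbar k^2 t / (2 m)), with hbar = m = 1.
N, L, t = 512, 40.0, 1.5
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)

psi0 = np.exp(-x**2 + 2j * x)                  # Gaussian packet, momentum ~ 2
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-0.5j * k**2 * t))

# The evolution is unitary, so the norm is unchanged.
dx = L / N
print(np.sum(np.abs(psi0)**2) * dx, np.sum(np.abs(psi_t)**2) * dx)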

Verification attempt.

If we apply the Pauli equation 1.1 to 3.18, does it produce the correct answer?

For the LHS we have

\begin{aligned}i \hbar \frac{\partial {\Psi}}{\partial {t}} &=\int \left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) \right) \hat{U}(\mathbf{x}, \mathbf{x}', t) \Psi(\mathbf{x}', 0) d^3 \mathbf{x}',\end{aligned} \hspace{\stretch{1}}(3.20)

but for the RHS we have

\begin{aligned}&\left( \frac{1}{{2m}} \left\langle{{(\mathbf{p} - \frac{e}{c}\mathbf{A})^2}}\right\rangle + e \phi \right) \Psi=\int d^3 \mathbf{x}'\hat{U}(\mathbf{x}, \mathbf{x}', t) \Psi(\mathbf{x}', 0)  \\ &\qquad\left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x}) +\frac{t}{2m i \hbar} \left( - \hbar^2 \boldsymbol{\nabla}^2 + 2 \frac{i e \hbar }{c} \mathbf{A} \cdot \boldsymbol{\nabla} \right)\left( \frac{1}{{2m}} \left( \hbar \mathbf{k} - \frac{e}{c} \mathbf{A}(\mathbf{x}) \right)^2 + e V(\mathbf{x})\right)\right) \end{aligned} \hspace{\stretch{1}}(3.21)

So if it were not for the spatial dependence of \mathbf{A} and \phi, the LHS would equal the RHS. It appears that ignoring the odd \mathbf{x} dependence in the \hat{\Psi} differential equation definitely leads to trouble; the approach only works for constant potential distributions, a rather boring special case.



Fourier transform solutions and associated energy and momentum for the homogeneous Maxwell equation. (rework once more)

Posted by peeterjoot on December 29, 2009

[Click here for a PDF of this post with nicer formatting]. Note that this PDF file is formatted in a wide-for-screen layout that is probably not good for printing.

These notes build on and replace those formerly posted in Energy and momentum for assumed Fourier transform solutions to the homogeneous Maxwell equation.

Motivation and notation.

In Electrodynamic field energy for vacuum (reworked) [1], building on Energy and momentum for Complex electric and magnetic field phasors [2], the energy and momentum density were derived for an assumed Fourier series solution to the homogeneous Maxwell’s equation. Here we move to the continuous case, examining Fourier transform solutions and the associated energy and momentum density.

A complex (phasor) representation is implied, so taking real parts of the fields is required when all is said and done. For the energy momentum tensor the Geometric Algebra form, modified for complex fields, is used

\begin{aligned}T(a) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \hspace{\stretch{1}}(1.1)

The assumed four vector potential will be written

\begin{aligned}A(\mathbf{x}, t) = A^\mu(\mathbf{x}, t) \gamma_\mu = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(1.2)

This is subject to the requirement that A is a solution of Maxwell’s equation

\begin{aligned}\nabla (\nabla \wedge A) = 0.\end{aligned} \hspace{\stretch{1}}(1.3)

To avoid latex hell, no special notation will be used for the Fourier coefficients,

\begin{aligned}A(\mathbf{k}, t) = \frac{1}{{(\sqrt{2 \pi})^3}} \int A(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{x}.\end{aligned} \hspace{\stretch{1}}(1.4)

When convenient and unambiguous, this (\mathbf{k},t) dependence will be implied.

Having picked a time and space representation for the field, it will be natural to express both the four potential and the gradient as scalar plus spatial vector, instead of using the Dirac basis. For the gradient this is

\begin{aligned}\nabla &= \gamma^\mu \partial_\mu = (\partial_0 - \boldsymbol{\nabla}) \gamma_0 = \gamma_0 (\partial_0 + \boldsymbol{\nabla}),\end{aligned} \hspace{\stretch{1}}(1.5)

and for the four potential (or the Fourier transform functions), this is

\begin{aligned}A &= \gamma_\mu A^\mu = (\phi + \mathbf{A}) \gamma_0 = \gamma_0 (\phi - \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(1.6)

Setup

The field bivector F = \nabla \wedge A is required for the energy momentum tensor. This is

\begin{aligned}\nabla \wedge A&= \frac{1}{{2}}\left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \\ &= \frac{1}{{2}}\left( (\stackrel{ \rightarrow }{\partial}_0 - \stackrel{ \rightarrow }{\boldsymbol{\nabla}}) \gamma_0 \gamma_0 (\phi - \mathbf{A})-(\phi + \mathbf{A}) \gamma_0 \gamma_0 (\stackrel{ \leftarrow }{\partial}_0 + \stackrel{ \leftarrow }{\boldsymbol{\nabla}})\right) \\ &= -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \frac{1}{{2}}(\stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}})\end{aligned}

This last term is a spatial curl and the field is then

\begin{aligned}F = -\boldsymbol{\nabla} \phi -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}.\end{aligned} \hspace{\stretch{1}}(2.7)

Applied to the Fourier representation this is

\begin{aligned}F =\frac{1}{{(\sqrt{2 \pi})^3}} \int\left(- \frac{1}{c} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(2.8)

It is only the real parts of this that we are actually interested in, unless physical meaning can be assigned to the complete complex vector field.

Constraints supplied by Maxwell’s equation.

A Fourier transform solution of Maxwell’s vacuum equation \nabla F = 0 has been assumed. Having expressed the Faraday bivector in terms of spatial vector quantities, it is more convenient to do this back substitution after pre-multiplying Maxwell’s equation by \gamma_0, namely

\begin{aligned}0&= \gamma_0 \nabla F \\ &= (\partial_0 + \boldsymbol{\nabla}) F.\end{aligned} \hspace{\stretch{1}}(3.9)

Applied to the spatially decomposed field as specified in (2.7), this is

\begin{aligned}0&=-\partial_0 \boldsymbol{\nabla} \phi-\partial_{00} \mathbf{A}+ \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A}-\boldsymbol{\nabla}^2 \phi- \boldsymbol{\nabla} \partial_0 \mathbf{A}+ \boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &=- \partial_0 \boldsymbol{\nabla} \phi - \boldsymbol{\nabla}^2 \phi- \partial_{00} \mathbf{A}- \boldsymbol{\nabla} \cdot \partial_0 \mathbf{A}+ \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ) \\ \end{aligned}

All grades of this equation must simultaneously equal zero, and the bivector grades have canceled (assuming commuting space and time partials), leaving two equations of constraint for the system

\begin{aligned}0 &=\boldsymbol{\nabla}^2 \phi + \boldsymbol{\nabla} \cdot \partial_0 \mathbf{A}\end{aligned} \hspace{\stretch{1}}(3.11)

\begin{aligned}0 &=\partial_{00} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A}+ \boldsymbol{\nabla} \partial_0 \phi + \boldsymbol{\nabla} ( \boldsymbol{\nabla} \cdot \mathbf{A} )\end{aligned} \hspace{\stretch{1}}(3.12)

It is immediately evident that a gauge transformation could be helpful to simplify things. In [3] the gauge choice \boldsymbol{\nabla} \cdot \mathbf{A} = 0 is used. From (3.11) this implies that \boldsymbol{\nabla}^2 \phi = 0. Bohm argues that for this current and charge free case this implies \phi = 0, but he also has a periodicity constraint. Without a periodicity constraint it is easy to manufacture non-zero counterexamples. One is a linear function in the space and time coordinates

\begin{aligned}\phi = p x + q y + r z + s t.\end{aligned} \hspace{\stretch{1}}(3.13)
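
That this linear \phi is harmonic can be verified by inspection, or mechanically (a throwaway sympy check of my own, not part of the original argument):

import sympy as sp

x, y, z, t, p, q, r, s = sp.symbols('x y z t p q r s')

phi = p * x + q * y + r * z + s * t   # the linear counterexample above
print(sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2))   # 0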

This is a valid scalar potential provided that the vector potential also satisfies its wave equation. We can, however, force \phi = 0 by making the transformation A^\mu \rightarrow A^\mu + \partial^\mu \psi, which in non-covariant notation is

\begin{aligned}\phi &\rightarrow \phi + \frac{1}{c} \partial_t \psi \\ \mathbf{A} &\rightarrow \mathbf{A} - \boldsymbol{\nabla} \psi\end{aligned} \hspace{\stretch{1}}(3.14)

If the transformed field \phi' = \phi + \partial_t \psi/c can be forced to zero, then the complexity of the associated Maxwell equations is reduced. In particular, antidifferentiation of \partial_t \psi = -c \phi yields

\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x}, 0) - c \int_{\tau=0}^t \phi(\mathbf{x}, \tau) d\tau.\end{aligned} \hspace{\stretch{1}}(3.16)
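
A quick check that this choice of \psi does kill the transformed scalar potential, differentiating under the integral sign (my own sympy sketch, with \phi left as an arbitrary function):

import sympy as sp

x, t, tau, c = sp.symbols('x t tau c')
phi = sp.Function('phi')
psi0 = sp.Function('psi0')

# psi(x, t) = psi(x, 0) - c Integral(phi(x, tau), (tau, 0, t)), as in (3.16)
psi = psi0(x) - c * sp.Integral(phi(x, tau), (tau, 0, t))

phi_prime = phi(x, t) + sp.diff(psi, t) / c
print(sp.simplify(phi_prime))   # 0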

Dropping primes, the transformed Maxwell equations now take the form

\begin{aligned}0 &= \partial_t( \boldsymbol{\nabla} \cdot \mathbf{A} )\end{aligned} \hspace{\stretch{1}}(3.17)

\begin{aligned}0 &=\partial_{00} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ).\end{aligned} \hspace{\stretch{1}}(3.18)

There are two classes of solutions that stand out for these equations. If the vector potential is constant in time \mathbf{A}(\mathbf{x},t) = \mathbf{A}(\mathbf{x}), Maxwell’s equations are reduced to the single equation

\begin{aligned}0&= - \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} ).\end{aligned} \hspace{\stretch{1}}(3.19)

Observe that a gradient can be factored out of this equation

\begin{aligned}- \boldsymbol{\nabla}^2 \mathbf{A} + \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A} )&=\boldsymbol{\nabla} (-\boldsymbol{\nabla} \mathbf{A} + \boldsymbol{\nabla} \cdot \mathbf{A} ) \\ &=-\boldsymbol{\nabla} (\boldsymbol{\nabla} \wedge \mathbf{A}).\end{aligned}
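
Restricted to the vector grade, this is the familiar vector algebra identity \boldsymbol{\nabla} \times (\boldsymbol{\nabla} \times \mathbf{A}) = \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) - \boldsymbol{\nabla}^2 \mathbf{A}. A sympy.vector cross check of that identity (my own sketch, building the vector Laplacian by hand):

import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

R = CoordSys3D('R')
Ax, Ay, Az = [sp.Function(n)(R.x, R.y, R.z) for n in ('Ax', 'Ay', 'Az')]
A = Ax * R.i + Ay * R.j + Az * R.k

def vector_laplacian(V):
    # component-wise Laplacian of a vector field
    cx, cy, cz = [sp.diff(V.dot(e), R.x, 2) + sp.diff(V.dot(e), R.y, 2) +
                  sp.diff(V.dot(e), R.z, 2) for e in (R.i, R.j, R.k)]
    return cx * R.i + cy * R.j + cz * R.k

d = (-vector_laplacian(A) + gradient(divergence(A))) - curl(curl(A))
print([sp.simplify(d.dot(e)) for e in (R.i, R.j, R.k)])   # [0, 0, 0]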

The solutions are then those \mathbf{A}s that satisfy both

\begin{aligned}0 &= \partial_t \mathbf{A} \\ 0 &= \boldsymbol{\nabla} (\boldsymbol{\nabla} \wedge \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(3.20)

In particular, any time independent potential \mathbf{A} with constant curl provides a solution to Maxwell’s equations. There may also be more general solutions to (3.19). Returning to (3.17), a second way to satisfy these equations stands out. Instead of requiring \mathbf{A} to have constant curl, a divergence constant in time also eliminates (3.17). The simplest resulting equations are those for which the divergence is constant in both time and space (such as zero). The solution set is then spanned by the vectors \mathbf{A} for which

\begin{aligned}\text{constant} &= \boldsymbol{\nabla} \cdot \mathbf{A} \end{aligned} \hspace{\stretch{1}}(3.22)

\begin{aligned}0 &= \frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \boldsymbol{\nabla}^2 \mathbf{A}.\end{aligned} \hspace{\stretch{1}}(3.23)

Any \mathbf{A} that both has constant divergence and satisfies the wave equation will via (2.7) then produce a solution to Maxwell’s equation.

Maxwell equation constraints applied to the assumed Fourier solutions.

Let’s consider Maxwell’s equations in all three forms, (3.11), (3.20), and (3.22) and apply these constraints to the assumed Fourier solution.

In all cases the starting point is a pair of Fourier transform relationships, where the Fourier transforms are the functions to be determined

\begin{aligned}\phi(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \phi(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.24)

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.25)

Case I. Constant time vector potential. Scalar potential eliminated by gauge transformation.

Applying the \partial_t \mathbf{A} = 0 constraint of (3.20) to (4.25), we require

\begin{aligned}0 = (2 \pi)^{-3/2} \int \partial_t \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.26)

So the Fourier transform also cannot have any time dependence, and we have

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \end{aligned} \hspace{\stretch{1}}(4.27)

What is the curl of this? Temporarily falling back to coordinates is easiest for this calculation

\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{A}(\mathbf{k}) e^{i\mathbf{k} \cdot \mathbf{x}}&=\sigma_m \partial_m \wedge \sigma_n A^n(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=\sigma_m \wedge \sigma_n A^n(\mathbf{k}) i k^m e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i\mathbf{k} \wedge \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ \end{aligned}

This gives

\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.28)

We want to equate the divergence of this to zero. Neglecting the integral and constant factor this requires

\begin{aligned}0 &= \boldsymbol{\nabla} \cdot \left( i \mathbf{k} \wedge \mathbf{A} e^{i\mathbf{k} \cdot \mathbf{x}} \right) \\ &= {\left\langle{{ \sigma_m \partial_m i (\mathbf{k} \wedge \mathbf{A}) e^{i\mathbf{k} \cdot \mathbf{x}} }}\right\rangle}_{1} \\ &= -{\left\langle{{ \sigma_m (\mathbf{k} \wedge \mathbf{A}) k^m e^{i\mathbf{k} \cdot \mathbf{x}} }}\right\rangle}_{1} \\ &= -\mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) e^{i\mathbf{k} \cdot \mathbf{x}} \\ \end{aligned}
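
Since \mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) = \mathbf{k}^2 \mathbf{A} - (\mathbf{k} \cdot \mathbf{A}) \mathbf{k}, the only way this contraction can vanish is with \mathbf{A} \propto \mathbf{k}. A small sympy cross check of my own (matrix stand-ins for the vectors):

import sympy as sp

kx, ky, kz, ax, ay, az, psi = sp.symbols('k_x k_y k_z a_x a_y a_z psi')

k = sp.Matrix([kx, ky, kz])
A = sp.Matrix([ax, ay, az])

# k . (k ^ A) = k^2 A - (k . A) k
v = k.dot(k) * A - k.dot(A) * k

print(v.subs(dict(zip((ax, ay, az), psi * k))).expand())   # zero vector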

Requiring that \mathbf{k} \cdot (\mathbf{k} \wedge \mathbf{A}) vanish for all \mathbf{k} therefore implies that \mathbf{A} \propto \mathbf{k}. The solution set is then completely described by functions of the form

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= (2 \pi)^{-3/2} \int \mathbf{k} \psi(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k},\end{aligned} \hspace{\stretch{1}}(4.29)

where \psi(\mathbf{k}) is an arbitrary scalar valued function. This is, however, an extremely uninteresting solution, since the curl is then uniformly zero

\begin{aligned}F &= \boldsymbol{\nabla} \wedge \mathbf{A} \\ &= (2 \pi)^{-3/2} \int (i \mathbf{k}) \wedge \mathbf{k} \psi(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}

Since \mathbf{k} \wedge \mathbf{k} = 0, when all is said and done the \phi = 0, \partial_t \mathbf{A} = 0 case appears to have no non-trivial solutions. Moving on, …

Case II. Constant vector potential divergence. Scalar potential eliminated by gauge transformation.

Next in the order of complexity is consideration of the case (3.22). Here we also have \phi = 0, eliminated by gauge transformation, and are looking for solutions with the constraint

\begin{aligned}\text{constant} &= \boldsymbol{\nabla} \cdot \mathbf{A}(\mathbf{x}, t) \\ &= (2 \pi)^{-3/2} \int i \mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}

How can this constraint be enforced? The only obvious way is a requirement for \mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t) to be zero for all (\mathbf{k},t), meaning that our to-be-determined Fourier transform coefficients are required to be perpendicular to the wave number vector at all times.

The remainder of Maxwell’s equations, (3.23), impose the additional constraint on the Fourier transform \mathbf{A}(\mathbf{k},t)

\begin{aligned}0 &= (2 \pi)^{-3/2} \int \left( \frac{1}{{c^2}} \partial_{tt} \mathbf{A}(\mathbf{k}, t) - i^2 \mathbf{k}^2 \mathbf{A}(\mathbf{k}, t)\right) e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.30)

For this to vanish for all \mathbf{x} it appears that we require the Fourier transforms \mathbf{A}(\mathbf{k}, t) to be harmonic in time

\begin{aligned}\partial_{tt} \mathbf{A}(\mathbf{k}, t) = - c^2 \mathbf{k}^2 \mathbf{A}(\mathbf{k}, t).\end{aligned} \hspace{\stretch{1}}(4.31)
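
Per wave number this is just a harmonic oscillator in time, as a one-line sympy verification of the exponential solutions below confirms (my own check, treating {\left\lvert{\mathbf{k}}\right\rvert} as a positive scalar k):

import sympy as sp

t, c, k = sp.symbols('t c k', positive=True)

A = sp.exp(sp.I * c * k * t)   # either sign of the exponent works
print(sp.simplify(sp.diff(A, t, 2) + c**2 * k**2 * A))   # 0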

This has the familiar exponential solutions

\begin{aligned}\mathbf{A}(\mathbf{k}, t) = \mathbf{A}_{\pm}(\mathbf{k}) e^{ \pm i c {\left\lvert{\mathbf{k}}\right\rvert} t },\end{aligned} \hspace{\stretch{1}}(4.32)

also subject to a requirement that \mathbf{k} \cdot \mathbf{A}(\mathbf{k}) = 0. Our field, where the \mathbf{A}_{\pm}(\mathbf{k}) are to be determined by initial time conditions, is by (2.7) of the form

\begin{aligned}F(\mathbf{x}, t)= \text{Real} \frac{i}{(\sqrt{2\pi})^3} \int \Bigl( -{\left\lvert{\mathbf{k}}\right\rvert} \mathbf{A}_{+}(\mathbf{k}) + \mathbf{k} \wedge \mathbf{A}_{+}(\mathbf{k}) \Bigr) \exp(i \mathbf{k} \cdot \mathbf{x} + i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}+ \text{Real} \frac{i}{(\sqrt{2\pi})^3} \int \Bigl( {\left\lvert{\mathbf{k}}\right\rvert} \mathbf{A}_{-}(\mathbf{k}) + \mathbf{k} \wedge \mathbf{A}_{-}(\mathbf{k}) \Bigr) \exp(i \mathbf{k} \cdot \mathbf{x} - i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.33)

Since 0 = \mathbf{k} \cdot \mathbf{A}_{\pm}(\mathbf{k}), we have \mathbf{k} \wedge \mathbf{A}_{\pm}(\mathbf{k}) = \mathbf{k} \mathbf{A}_{\pm}. This allows for factoring out of {\left\lvert{\mathbf{k}}\right\rvert}. The structure of the solution is not changed by incorporating the i (2\pi)^{-3/2} {\left\lvert{\mathbf{k}}\right\rvert} factors into \mathbf{A}_{\pm}, leaving the field having the general form

\begin{aligned}F(\mathbf{x}, t)= \text{Real} \int ( \hat{\mathbf{k}} - 1 ) \mathbf{A}_{+}(\mathbf{k}) \exp(i \mathbf{k} \cdot \mathbf{x} + i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}+ \text{Real} \int ( \hat{\mathbf{k}} + 1 ) \mathbf{A}_{-}(\mathbf{k}) \exp(i \mathbf{k} \cdot \mathbf{x} - i c {\left\lvert{\mathbf{k}}\right\rvert} t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.34)

The original meaning of \mathbf{A}_{\pm} as Fourier transforms of the vector potential is obscured by the tidy up change to absorb {\left\lvert{\mathbf{k}}\right\rvert}, but the geometry of the solution is clearer this way.

It is also particularly straightforward to confirm that \gamma_0 \nabla F = 0 separately for either half of (4.34).
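
The essential algebraic reason is that (1 + \hat{\mathbf{k}})(\hat{\mathbf{k}} - 1) = \hat{\mathbf{k}}^2 - 1 = 0, since \partial_0 and \boldsymbol{\nabla} acting on a given mode bring down factors i {\left\lvert{\mathbf{k}}\right\rvert} and i \mathbf{k} respectively. A numerical spot check of that product, using the Pauli matrices as a representation of the spatial vector algebra (my own sketch, with an arbitrary hard coded \mathbf{k}):

import numpy as np

# Pauli matrix representation of the spatial vector algebra: v -> v . sigma,
# for which (k_hat)^2 = 1 for any unit vector k_hat.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

k = np.array([0.3, -1.2, 0.7])                    # arbitrary direction
k_hat = (k[0] * sx + k[1] * sy + k[2] * sz) / np.linalg.norm(k)

I2 = np.eye(2)
print(np.allclose((I2 + k_hat) @ (k_hat - I2), 0))   # True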

Case III. Non-zero scalar potential. No gauge transformation.

Now let’s work from (3.11). In particular, a divergence operation can be factored from (3.11), for

\begin{aligned}0 = \boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \phi + \partial_0 \mathbf{A}).\end{aligned} \hspace{\stretch{1}}(4.35)

Right off the top, there is a requirement for

\begin{aligned}\text{constant} = \boldsymbol{\nabla} \phi + \partial_0 \mathbf{A}.\end{aligned} \hspace{\stretch{1}}(4.36)

In terms of the Fourier transforms this is

\begin{aligned}\text{constant} = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(i \mathbf{k} \phi(\mathbf{k}, t) + \frac{1}{c} \partial_t \mathbf{A}(\mathbf{k}, t)\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.37)

Are there any ways for this to equal a constant for all \mathbf{x} without requiring that constant to be zero? Assuming for now that there are not, so that this constant must be zero, we have a coupling between the \phi and \mathbf{A} Fourier transforms of the form

\begin{aligned}\phi(\mathbf{k}, t) = -\frac{1}{{i c \mathbf{k}}} \partial_t \mathbf{A}(\mathbf{k}, t).\end{aligned} \hspace{\stretch{1}}(4.38)

A secondary implication is that \partial_t \mathbf{A}(\mathbf{k}, t) \propto \mathbf{k} or else \phi(\mathbf{k}, t) is not a scalar. We had a transverse solution by requiring via gauge transformation that \phi = 0, and here we have instead the vector potential in the propagation direction.

A secondary confirmation that this is a required coupling between the scalar and vector potential can be had by evaluating the divergence equation of (4.35)

\begin{aligned}0 = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(- \mathbf{k}^2 \phi(\mathbf{k}, t) + \frac{i\mathbf{k}}{c} \cdot \partial_t \mathbf{A}(\mathbf{k}, t)\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.39)

Rearranging this also produces (4.38). We want to now substitute this relationship into (3.12).

Starting with just the \partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A} part we have

\begin{aligned}\partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A}&=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{i}{c^2 \mathbf{k}} \partial_{tt} \mathbf{A}(\mathbf{k}, t) + i \mathbf{k} \cdot \mathbf{A}\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.40)

Taking the gradient of this brings down a factor of i\mathbf{k} for

\begin{aligned}\boldsymbol{\nabla} (\partial_0 \phi + \boldsymbol{\nabla} \cdot \mathbf{A})&=-\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{1}{c^2} \partial_{tt} \mathbf{A}(\mathbf{k}, t) + \mathbf{k} (\mathbf{k} \cdot \mathbf{A})\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.41)

(3.12) in its entirety is now

\begin{aligned}0 &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(- (i\mathbf{k})^2 \mathbf{A} - \mathbf{k} (\mathbf{k} \cdot \mathbf{A})\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.42)

This isn’t terribly pleasant looking. Perhaps going the other direction helps. We could write

\begin{aligned}\phi = \frac{i}{c \mathbf{k}} \frac{\partial {\mathbf{A}}}{\partial {t}} = \frac{i}{c} \frac{\partial {\psi}}{\partial {t}},\end{aligned} \hspace{\stretch{1}}(4.43)

so that

\begin{aligned}\mathbf{A}(\mathbf{k}, t) = \mathbf{k} \psi(\mathbf{k}, t).\end{aligned} \hspace{\stretch{1}}(4.44)

\begin{aligned}0 &=\frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(\frac{1}{{c^2}} \mathbf{k} \psi_{tt}- \boldsymbol{\nabla}^2 \mathbf{k} \psi + \boldsymbol{\nabla} \frac{i}{c^2} \psi_{tt}+\boldsymbol{\nabla}( \boldsymbol{\nabla} \cdot (\mathbf{k} \psi) )\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k} \\ \end{aligned}

Note that the gradients here operate on everything to the right, including and especially the exponential. Each application of the gradient brings down an additional i\mathbf{k} factor, and we have

\begin{aligned}\frac{1}{{(\sqrt{2 \pi})^3}} \int \mathbf{k} \Bigl(\frac{1}{{c^2}} \psi_{tt}- i^2 \mathbf{k}^2 \psi + \frac{i^2}{c^2} \psi_{tt}+i^2 \mathbf{k}^2 \psi \Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned}

This is identically zero, so we see that this second equation provides no additional information. That is somewhat surprising, since the first equation doesn’t supply very many constraints. The function \psi(\mathbf{k}, t) can be anything. Understanding of this curiosity comes from computation of the Faraday bivector itself. From (2.7), that is

\begin{aligned}F = \frac{1}{{(\sqrt{2 \pi})^3}} \int \Bigl(-i \mathbf{k} \frac{i}{c}\psi_t - \frac{1}{c} \mathbf{k} \psi_t + i \mathbf{k} \wedge \mathbf{k} \psi\Bigr)e^{i \mathbf{k} \cdot \mathbf{x} } d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(4.45)

All terms cancel, so we see that a non-zero \phi leads to F = 0, as was the case when considering (4.29) (a case that also resulted in \mathbf{A}(\mathbf{k}) \propto \mathbf{k}).

Can this Fourier representation lead to a non-transverse solution to Maxwell’s equation? If so, it is not obvious how.

The energy momentum tensor

The energy momentum tensor is then

\begin{aligned}T(a) &= -\frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(- \frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)+ i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)- i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)a\left(- \frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)- i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.46)

Observing that \gamma_0 commutes with spatial bivectors and anticommutes with spatial vectors, and writing \sigma_\mu = \gamma_\mu \gamma_0, the tensor splits neatly into scalar and spatial vector components

\begin{aligned}T(\gamma_\mu) \cdot \gamma_0 &= \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)+ i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)}}\right\rangle e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}' \\ T(\gamma_\mu) \wedge \gamma_0 &= \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint{\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)+ i \mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)+ i \mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)}}\right\rangle}_{1}e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.47)

In particular for \mu = 0, we have

\begin{aligned}H &\equiv T(\gamma_0) \cdot \gamma_0 = \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)\right)\cdot\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)\right)- (\mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)) \cdot (\mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t))\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}' \\ \mathbf{P} &\equiv T(\gamma_0) \wedge \gamma_0 = \frac{\epsilon_0}{2 (2 \pi)^3} \text{Real} \iint\left(i\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}(\mathbf{k}',t)- i \mathbf{k}' {{\phi}}^{*}(\mathbf{k}', t)\right) \cdot\left(\mathbf{k} \wedge \mathbf{A}(\mathbf{k}, t)\right)-i\left(\frac{1}{c} \dot{\mathbf{A}}(\mathbf{k}, t)+ i \mathbf{k} \phi(\mathbf{k}, t)\right)\cdot\left(\mathbf{k}' \wedge {\mathbf{A}}^{*}(\mathbf{k}', t)\right)\right)e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x} } d^3 \mathbf{k} d^3 \mathbf{k}'.\end{aligned} \hspace{\stretch{1}}(5.49)

Integrating this over all space and identifying the delta function

\begin{aligned}\delta(\mathbf{k}) \equiv \frac{1}{{(2 \pi)^3}} \int e^{i \mathbf{k} \cdot \mathbf{x}} d^3 \mathbf{x},\end{aligned} \hspace{\stretch{1}}(5.51)

reduces the tensor to a single integral in the continuous angular wave number space of \mathbf{k}.

\begin{aligned}\int T(a) d^3 \mathbf{x} &= -\frac{\epsilon_0}{2} \text{Real} \int\left(- \frac{1}{c} {{\dot{\mathbf{A}}}}^{*}+ i \mathbf{k} {{\phi}}^{*}- i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)a\left(- \frac{1}{c} \dot{\mathbf{A}}- i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.52)

Or,

\begin{aligned}\int T(\gamma_\mu) \gamma_0 d^3 \mathbf{x} =\frac{\epsilon_0}{2} \text{Real} \int{\left\langle{{\left(\frac{1}{c} {{\dot{\mathbf{A}}}}^{*}- i \mathbf{k} {{\phi}}^{*}+ i \mathbf{k} \wedge {\mathbf{A}}^{*}\right)\sigma_\mu\left(\frac{1}{c} \dot{\mathbf{A}}+ i \mathbf{k} \phi+ i \mathbf{k} \wedge \mathbf{A}\right)}}\right\rangle}_{{0,1}}d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.53)
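
The delta function collapse of the double integral above is really just the continuous form of Parseval’s theorem. A 1D numerical illustration using FFTs (my own sketch, using the unitary 1/\sqrt{2\pi} convention of these notes):

import numpy as np

# Parseval check: integral of |f|^2 dx equals integral of |f_hat|^2 dk.
N, L = 1024, 50.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L / N
dk = 2 * np.pi / L

f = np.exp(-x**2) * np.cos(3 * x)
f_hat = np.fft.fft(f) * dx / np.sqrt(2 * np.pi)   # unitary convention

print(np.sum(np.abs(f)**2) * dx, np.sum(np.abs(f_hat)**2) * dk)   # equal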

Multiplying out (5.53) yields for \int H

\begin{aligned}\int H d^3 \mathbf{x} &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 ({\left\lvert{\phi}\right\rvert}^2 + {\left\lvert{\mathbf{A}}\right\rvert}^2 )- {\left\lvert{\mathbf{k} \cdot \mathbf{A}}\right\rvert}^2+ 2 \frac{\mathbf{k}}{c} \cdot \text{Real}( i {{\phi}}^{*} \dot{\mathbf{A}} )\right)\end{aligned} \hspace{\stretch{1}}(5.54)

Recall that the only non-trivial solution we found for the assumed Fourier transform representation of F was for \phi = 0, \mathbf{k} \cdot \mathbf{A}(\mathbf{k}, t) = 0. Thus we have for the energy density integrated over all space, just

\begin{aligned}\int H d^3 \mathbf{x} &=\frac{\epsilon_0}{2} \int d^3 \mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}}\right\rvert}^2 + \mathbf{k}^2 {\left\lvert{\mathbf{A}}\right\rvert}^2 \right).\end{aligned} \hspace{\stretch{1}}(5.55)

Observe that we have the structure of a harmonic oscillator for the energy of the radiation system. What is the canonical momentum for this system? Will it correspond to the Poynting vector, integrated over all space?

Let’s reduce the vector component of (5.53), after first imposing the \phi=0 and \mathbf{k} \cdot \mathbf{A} = 0 conditions used above for our harmonic oscillator form of the energy relationship. This is

\begin{aligned}\int \mathbf{P} d^3 \mathbf{x} &=\frac{\epsilon_0}{2 c} \text{Real} \int d^3 \mathbf{k} \left( i {\mathbf{A}}^{*}_t \cdot (\mathbf{k} \wedge \mathbf{A})+ i (\mathbf{k} \wedge {\mathbf{A}}^{*}) \cdot \mathbf{A}_t\right) \\ &=\frac{\epsilon_0}{2 c} \text{Real} \int d^3 \mathbf{k} \left( -i ({\mathbf{A}}^{*}_t \cdot \mathbf{A}) \mathbf{k}+ i \mathbf{k} ({\mathbf{A}}^{*} \cdot \mathbf{A}_t)\right)\end{aligned}

This is just

\begin{aligned}\int \mathbf{P} d^3 \mathbf{x} &=\frac{\epsilon_0}{c} \text{Real} i \int \mathbf{k} ({\mathbf{A}}^{*} \cdot \mathbf{A}_t) d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.56)

Recall that the Fourier transforms for the transverse propagation case had the form \mathbf{A}(\mathbf{k}, t) = \mathbf{A}_{\pm}(\mathbf{k}) e^{\pm i c {\left\lvert{\mathbf{k}}\right\rvert} t}, where the minus generated the advanced wave, and the plus the receding wave. With substitution of the vector potential for the advanced wave into the energy and momentum results of (5.55) and (5.56) respectively, we have

\begin{aligned}\int H d^3 \mathbf{x}   &= \epsilon_0 \int \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k} \\ \int \mathbf{P} d^3 \mathbf{x} &= \epsilon_0 \int \hat{\mathbf{k}} \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.57)
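
Per mode, the equality of the energy and momentum integrand magnitudes is easy to spot check. A sympy sketch for a single advanced wave mode with a real amplitude (my own cross check, dropping the common \epsilon_0 factor):

import sympy as sp

t, c, k, A0 = sp.symbols('t c k A_0', positive=True)

A = A0 * sp.exp(-sp.I * c * k * t)    # advanced wave branch of (4.32)
At = sp.diff(A, t)

# (5.55) and (5.56) integrands, dropping the common epsilon_0 factor
H = (sp.Abs(At)**2 / c**2 + k**2 * sp.Abs(A)**2) / 2
P_mag = k * sp.re(sp.I * sp.conjugate(A) * At) / c

print(sp.simplify(H - P_mag))   # 0; both reduce to k**2 * A_0**2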

After a somewhat circuitous route, this has the relativistic symmetry that is expected. In particular, for the complete \mu=0 tensor we have, after integration over all space,

\begin{aligned}\int T(\gamma_0) \gamma_0 d^3 \mathbf{x} = \epsilon_0 \int (1 + \hat{\mathbf{k}}) \mathbf{k}^2 {\left\lvert{\mathbf{A}(\mathbf{k})}\right\rvert}^2 d^3 \mathbf{k}.\end{aligned} \hspace{\stretch{1}}(5.59)

The receding wave solution would give the same result, but directed as 1 - \hat{\mathbf{k}} instead.

Observe that we also have the four divergence conservation statement that is expected

\begin{aligned}\frac{\partial {}}{\partial {t}} \int H d^3 \mathbf{x} + \boldsymbol{\nabla} \cdot \int c \mathbf{P} d^3 \mathbf{x} &= 0.\end{aligned} \hspace{\stretch{1}}(5.60)

This follows trivially, since both derivatives are zero. If the integration region were finite, instead of a 0 + 0 = 0 relationship we’d have the power {\partial {H}}/{\partial {t}} equal in magnitude to the momentum flux through the bounding surface. For a more general integration region the time and spatial dependencies wouldn’t necessarily vanish, but we should still have this radiation energy momentum conservation.

References

[1] Peeter Joot. Electrodynamic field energy for vacuum. [online]. http://sites.google.com/site/peeterjoot/math2009/fourierMaxVac.pdf.

[2] Peeter Joot. Energy and momentum for Complex electric and magnetic field phasors. [online]. http://sites.google.com/site/peeterjoot/math2009/complexFieldEnergy.pdf.

[3] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.
