# Peeter Joot's (OLD) Blog.


# Posts Tagged ‘fourier series’

## Fourier coefficient integral for periodic function

Posted by peeterjoot on November 5, 2013

In phy487 we’ve been using the fact that a periodic function

\begin{aligned}V(\mathbf{r}) = V(\mathbf{r} + \mathbf{r}_n),\end{aligned} \hspace{\stretch{1}}(1.1)

where

\begin{aligned}\mathbf{r}_n = a_1 \mathbf{a}_1 + a_2 \mathbf{a}_2 + a_3 \mathbf{a}_3,\end{aligned} \hspace{\stretch{1}}(1.2)

has a Fourier representation

\begin{aligned}V(\mathbf{r}) = \sum_\mathbf{G} V_\mathbf{G} e^{ i \mathbf{G} \cdot \mathbf{r} }.\end{aligned} \hspace{\stretch{1}}(1.3)

Here $\mathbf{G}$ is a vector in reciprocal space, say

\begin{aligned}\mathbf{G}_{rst} = r \mathbf{g}_1 + s \mathbf{g}_2 + t \mathbf{g}_3,\end{aligned} \hspace{\stretch{1}}(1.4)

where

\begin{aligned}\mathbf{g}_i \cdot \mathbf{a}_j = 2 \pi \delta_{ij}.\end{aligned} \hspace{\stretch{1}}(1.5)

Now let’s express the explicit form of the Fourier coefficient $V_\mathbf{G}$ so that we can compute the Fourier representation of some periodic potentials for numerical experimentation. In particular, let’s think about what it means to integrate over a unit cell. Suppose we have a parameterization of the points in the unit cell

\begin{aligned}\mathbf{r} = u \mathbf{a}_1 + v \mathbf{a}_2 + w \mathbf{a}_3,\end{aligned} \hspace{\stretch{1}}(1.6)

as sketched in fig. 1.1. Here $u, v, w \in [0, 1]$. We can compute the values of $u, v, w$ for any vector $\mathbf{r}$ in the cell by reciprocal projection

Fig 1.1: Unit cell

\begin{aligned}\mathbf{r} = \frac{1}{{2 \pi}} \left( \left( \mathbf{r} \cdot \mathbf{g}_1 \right) \mathbf{a}_1 + \left( \mathbf{r} \cdot \mathbf{g}_2 \right) \mathbf{a}_2 + \left( \mathbf{r} \cdot \mathbf{g}_3 \right) \mathbf{a}_3 \right),\end{aligned} \hspace{\stretch{1}}(1.7)

or

\begin{aligned}\begin{aligned}u(\mathbf{r}) &= \frac{1}{{2 \pi}} \mathbf{r} \cdot \mathbf{g}_1 \\ v(\mathbf{r}) &= \frac{1}{{2 \pi}} \mathbf{r} \cdot \mathbf{g}_2 \\ w(\mathbf{r}) &= \frac{1}{{2 \pi}} \mathbf{r} \cdot \mathbf{g}_3.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.8)
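As a quick numerical sanity check of eqs. (1.5) and (1.8), here is a minimal sketch (assuming numpy and an arbitrarily chosen non-orthogonal cell, not anything from the course) that builds the reciprocal basis from the standard cross-product formulas and recovers $(u, v, w)$ by reciprocal projection:

```python
import numpy as np

# Direct lattice vectors for a made-up non-orthogonal cell.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.3, 1.1, 0.0])
a3 = np.array([0.2, 0.1, 0.9])

# Reciprocal basis satisfying g_i . a_j = 2 pi delta_ij.
vol = np.dot(a1, np.cross(a2, a3))
g1 = 2 * np.pi * np.cross(a2, a3) / vol
g2 = 2 * np.pi * np.cross(a3, a1) / vol
g3 = 2 * np.pi * np.cross(a1, a2) / vol

# Build a point r = u a1 + v a2 + w a3 from known cell coordinates.
u, v, w = 0.25, 0.5, 0.75
r = u * a1 + v * a2 + w * a3

# Recover (u, v, w) by reciprocal projection, eq. (1.8).
u_r = np.dot(r, g1) / (2 * np.pi)
v_r = np.dot(r, g2) / (2 * np.pi)
w_r = np.dot(r, g3) / (2 * np.pi)
print(u_r, v_r, w_r)  # 0.25 0.5 0.75 (up to rounding)
```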

Let’s suppose that $V(\mathbf{r})$ is periodic in the unit cell spanned by $\mathbf{r} = u \mathbf{a}_1 + v \mathbf{a}_2 + w \mathbf{a}_3$ with $u, v, w \in [0, 1]$, and integrate over the unit cube for that parameterization to compute $V_\mathbf{G}$

\begin{aligned}\int_0^1 du\int_0^1 dv\int_0^1 dwV( u \mathbf{a}_1 + v \mathbf{a}_2 + w \mathbf{a}_3 ) e^{-i \mathbf{G}' \cdot \mathbf{r} }=\sum_{r s t}V_{\mathbf{G}_{r s t}}\int_0^1 du\int_0^1 dv\int_0^1 dwe^{-i \mathbf{G}' \cdot \mathbf{r} }e^{i \mathbf{G} \cdot \mathbf{r} }\end{aligned} \hspace{\stretch{1}}(1.9)

Let’s write

\begin{aligned}\begin{aligned}\mathbf{G} &= r \mathbf{g}_1 + s \mathbf{g}_2 + t \mathbf{g}_3 \\ \mathbf{G}' &= r' \mathbf{g}_1 + s' \mathbf{g}_2 + t' \mathbf{g}_3,\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.10)

so that

\begin{aligned}e^{-i \mathbf{G}' \cdot \mathbf{r} } e^{i \mathbf{G} \cdot \mathbf{r} }=e^{ 2 \pi i (r - r') u } e^{ 2 \pi i (s - s') u } e^{ 2 \pi i (t - t') u } \end{aligned} \hspace{\stretch{1}}(1.11)

Picking the $u$ integral of this integrand as representative, we have when $r = r'$

\begin{aligned}\int_0^1 du e^{ 2 \pi i (r - r') u } =\int_0^1 du= 1,\end{aligned} \hspace{\stretch{1}}(1.12)

and when $r \ne r'$

\begin{aligned}\int_0^1 du e^{ 2 \pi i (r - r') u } ={\left.{{ \frac{ e^{ 2 \pi i (r - r') u } } { 2 \pi i (r - r') }}}\right\vert}_{{u = 0}}^{{1}}=\frac{1}{{2 \pi i (r - r') }} \left( e^{ 2 \pi i (r - r') } - 1 \right).\end{aligned} \hspace{\stretch{1}}(1.13)

This is just zero since $r - r'$ is an integer, so we have

\begin{aligned}\int_0^1 du e^{ 2 \pi i (r - r') u } = \delta_{r, r'}.\end{aligned} \hspace{\stretch{1}}(1.14)
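Eq. (1.14) is easy to check numerically. A minimal sketch, assuming numpy (a uniform-grid mean over one period reproduces the integral essentially exactly for integer frequency differences):

```python
import numpy as np

def mode_overlap(r, rp, n=4096):
    """Approximate int_0^1 du exp(2 pi i (r - rp) u) as a discrete mean."""
    u = np.arange(n) / n
    return np.mean(np.exp(2j * np.pi * (r - rp) * u))

print(abs(mode_overlap(3, 3)))  # 1.0
print(abs(mode_overlap(3, 5)))  # ~0
```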

This gives us

\begin{aligned}\int_0^1 du\int_0^1 dv\int_0^1 dwV( u \mathbf{a}_1 + v \mathbf{a}_2 + w \mathbf{a}_3 ) e^{ -2 \pi i r' u } e^{ -2 \pi i s' v } e^{ -2 \pi i t' w } =\sum_{r s t}V_{\mathbf{G}_{r s t}}\delta_{r s t, r' s' t'}= V_{\mathbf{G}_{r' s' t'}}.\end{aligned} \hspace{\stretch{1}}(1.15)

This is our Fourier coefficient. The Fourier series written out in gory but explicit detail is

\begin{aligned}\boxed{V( u \mathbf{a}_1 + v \mathbf{a}_2 + w \mathbf{a}_3 ) = \sum_{r s t}\left( \int_0^1 du' \int_0^1 dv' \int_0^1 dw' V( u' \mathbf{a}_1 + v' \mathbf{a}_2 + w' \mathbf{a}_3 ) e^{ -2 \pi i (r u' + s v' + t w') } \right)e^{ 2 \pi i (r u + s v + t w) }.}\end{aligned} \hspace{\stretch{1}}(1.16)

Also observe the unfortunate detail that we require integrability of the potential in the unit cell for the Fourier integrals to converge. This prohibits the use of the most obvious potential for numerical experimentation, the inverse radial $V(\mathbf{r}) = -1/\left\lvert {\mathbf{r}} \right\rvert$.
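For a potential that is integrable over the cell, the coefficients of eq. (1.15) are straightforward to evaluate numerically. A sketch, assuming numpy and a made-up two-term cosine potential with known coefficients, using a discrete triple average over the unit cube:

```python
import numpy as np

n = 64
u = np.arange(n) / n
Ug, Vg, Wg = np.meshgrid(u, u, u, indexing="ij")

# A made-up, integrable periodic potential with known coefficients.
pot = np.cos(2 * np.pi * Ug) + 0.5 * np.cos(2 * np.pi * (Vg + Wg))

def coeff(r, s, t):
    """Eq. (1.15): V_{G_{rst}} as a discrete average over the unit cube."""
    phase = np.exp(-2j * np.pi * (r * Ug + s * Vg + t * Wg))
    return np.mean(pot * phase)

print(abs(coeff(1, 0, 0)))  # 0.5  (from cos(2 pi u))
print(abs(coeff(0, 1, 1)))  # 0.25 (from 0.5 cos(2 pi (v + w)))
print(abs(coeff(2, 0, 0)))  # ~0
```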

## A Fourier series refresher.

Posted by peeterjoot on May 3, 2012

# Motivation.

I’d used the wrong scaling in a Fourier series over a $[0, 1]$ interval. Here’s a reminder to self what the right way to do this is.

# Guts

Suppose we have a function that is defined in terms of a trigonometric Fourier sum

\begin{aligned}\phi(x) = \sum c_k e^{i \omega k x},\end{aligned} \hspace{\stretch{1}}(2.1)

where the domain of interest is $x \in [a, b]$. Stating the problem this way avoids any issue of existence. We know $c_k$ exists, but just want to find what they are given some other representation of the function.

Multiplying and integrating over our domain we have

\begin{aligned}\begin{aligned}\int_a^b \phi(x) e^{-i \omega m x} dx &= \sum c_k \int_a^b e^{i \omega (k -m) x} dx \\ &= c_m (b - a) + \sum_{k \ne m} \frac{e^{i \omega(k-m) b} - e^{i \omega(k-m)a}}{i \omega (k -m)} .\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.2)

We want all the terms in the sum to be zero, requiring equality of the exponentials, or

\begin{aligned}e^{i \omega (k -m) (b -a )} = 1,\end{aligned} \hspace{\stretch{1}}(2.3)

or

\begin{aligned}\omega = \frac{2 \pi}{b - a}.\end{aligned} \hspace{\stretch{1}}(2.4)

This fixes our Fourier coefficients

\begin{aligned}c_m = \frac{1}{{b - a}} \int_a^b \phi(x) e^{- 2 \pi i m x/(b - a)} dx.\end{aligned} \hspace{\stretch{1}}(2.5)

Given this, the correct (but unnormalized) Fourier basis for a $[0, 1]$ interval would be the functions $e^{2 \pi i k x}$ for integer $k$, or the sine and cosine equivalents.
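A quick numerical check of eq. (2.5) is straightforward. A sketch, assuming numpy and a test function assembled from a few known exponentials on the $[0, 1]$ interval:

```python
import numpy as np

n = 4096
x = np.arange(n) / n  # uniform grid on [0, 1), so b - a = 1

# Test function with known coefficients: c_2 = 3, c_{-1} = -1j.
phi = 3 * np.exp(2j * np.pi * 2 * x) - 1j * np.exp(-2j * np.pi * x)

def c(m):
    """Eq. (2.5) with a = 0, b = 1, evaluated as a discrete mean."""
    return np.mean(phi * np.exp(-2j * np.pi * m * x))

print(c(2))   # ~3
print(c(-1))  # ~-1j
print(c(0))   # ~0
```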


## Curious problem using the variational method to find the ground state energy of the Harmonic oscillator.

Posted by peeterjoot on October 3, 2011


# Recap. Variational method to find the ground state energy.

Problem 3 of section 24.4 in the text [1] is an interesting one. It asks to use the variational method to find the ground state energy of a one dimensional harmonic oscillator Hamiltonian.

Somewhat unexpectedly, once I take derivatives and equate to zero, I find that the variational parameter beta becomes imaginary?

I tried this twice on paper and pencil, both times getting the same thing. This seems like a noteworthy problem, and one worth reflecting on a bit.

# Recap. The variational method.

Given any wavefunction, not necessarily normalized, with a series representation in terms of the energy eigenvectors for the space

\begin{aligned}{\lvert {\psi} \rangle} = \sum_m c_{m} {\lvert {\psi_m} \rangle},\end{aligned} \hspace{\stretch{1}}(3.1)

where

\begin{aligned}H {\lvert {\psi_m} \rangle} = E_m {\lvert {\psi_m} \rangle},\end{aligned} \hspace{\stretch{1}}(3.2)

and

\begin{aligned}\left\langle{{\psi_m}} \vert {{\psi_n}}\right\rangle = \delta_{mn}.\end{aligned} \hspace{\stretch{1}}(3.3)

We can perform an energy expectation calculation with respect to this more general state

\begin{aligned}{\langle {\psi} \rvert} H {\lvert {\psi} \rangle} &= \sum_m c_{m}^{*} {\langle {\psi_m} \rvert} H\sum_n c_{n} {\lvert {\psi_n} \rangle} \\ &=\sum_m c_{m}^{*} {\langle {\psi_m} \rvert}\sum_n c_{n} E_n {\lvert {\psi_n} \rangle} \\ &=\sum_{m,n} c_{m}^{*} c_n E_n \left\langle{{\psi_m}} \vert {{\psi_n}}\right\rangle \\ &=\sum_{m} {\left\lvert{c_{m}}\right\rvert}^2 E_m \\ &\ge\sum_{m} {\left\lvert{c_{m}}\right\rvert}^2 E_0=E_0 \left\langle{{\psi}} \vert {{\psi}}\right\rangle.\end{aligned}

This allows us to form an estimate of the ground state energy for the system, by using any state vector formed from a superposition of energy eigenstates, by simply calculating

\begin{aligned}E_0 \le \frac{{\langle {\psi} \rvert} H {\lvert {\psi} \rangle}}{ \left\langle{{\psi}} \vert {{\psi}}\right\rangle }.\end{aligned} \hspace{\stretch{1}}(3.4)
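This bound is easy to demonstrate numerically for a system where $E_0$ is known. A sketch (assuming numpy, a crude finite-difference discretization of the harmonic oscillator Hamiltonian in units $\hbar = m = \omega = 1$, and Gaussian trial functions, none of which come from the text):

```python
import numpy as np

# Finite-difference harmonic oscillator, hbar = m = omega = 1, E_0 = 1/2.
n, L = 2000, 20.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

def rayleigh(psi):
    """<psi|H|psi> / <psi|psi> with a second-difference kinetic term."""
    d2 = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / h**2
    d2[0] = d2[-1] = 0.0  # trial functions vanish at the box edges
    Hpsi = -0.5 * d2 + 0.5 * x**2 * psi
    return np.dot(psi, Hpsi) / np.dot(psi, psi)

# Every trial function gives an upper bound on E_0 = 0.5; the bound is
# tight for alpha = 0.5, where exp(-alpha x^2) is the exact ground state.
for alpha in (0.2, 0.5, 1.0, 2.0):
    print(alpha, rayleigh(np.exp(-alpha * x**2)))
```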

One of the examples in the text is to use this to find an approximation of the ground state energy for the Helium atom Hamiltonian

\begin{aligned}H = -\frac{\hbar^2}{2m} \left( \boldsymbol{\nabla}_1^2+\boldsymbol{\nabla}_2^2\right) - 2 e^2 \left( \frac{1}{{r_1}} + \frac{1}{{r_2}} \right) + \frac{e^2}{{\left\lvert{\mathbf{r}_1 - \mathbf{r}_2}\right\rvert}}.\end{aligned} \hspace{\stretch{1}}(3.5)

This calculation is performed using a trial function that was a solution of the interaction free Hamiltonian

\begin{aligned}\phi = \frac{Z^3}{\pi a_0^3} e^{-Z (r_1 + r_2)/a_0 }.\end{aligned} \hspace{\stretch{1}}(3.6)

This is despite the fact that the trial function is not a solution of the interacting Hamiltonian. The end result ends up being pretty close to the measured value (although there is a pesky error in the book that appears to require a compensating error somewhere else).

Part of the variational technique used in that problem is to allow $Z$ to vary, and then, once the normalized expectation is computed, set its derivative with respect to $Z$ equal to zero, picking out the trial wavefunction of that form with the lowest energy expectation. Considering the harmonic oscillator, we find that this final variation does not necessarily produce meaningful results.

# The Harmonic oscillator variational problem.

The problem asks for the use of the trial wavefunction

\begin{aligned}\phi = e^{-\beta {\left\lvert{x}\right\rvert}},\end{aligned} \hspace{\stretch{1}}(4.7)

to perform the variational calculation above for the Harmonic oscillator Hamiltonian, which has the one dimensional position space representation

\begin{aligned}H = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + \frac{1}{{2}} m \omega^2 x^2.\end{aligned} \hspace{\stretch{1}}(4.8)

We can find the normalization easily

\begin{aligned}\left\langle{{\phi}} \vert {{\phi}}\right\rangle &= \int_{-\infty}^\infty e^{- 2 \beta {\left\lvert{x}\right\rvert}} dx \\ &= 2 \frac{1}{{2 \beta}} \int_{0}^\infty e^{- 2 \beta x} 2 \beta dx \\ &= 2 \frac{1}{{2 \beta}} \int_{0}^\infty e^{- u} du \\ &= \frac{1}{{\beta}}\end{aligned}

After a bit of calculation, using integration by parts, we find for the energy expectation

\begin{aligned}{\langle {\phi} \rvert} H {\lvert {\phi} \rangle} = \int_{-\infty}^\infty dxe^{- \beta {\left\lvert{x}\right\rvert}} \left( -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + \frac{1}{{2}} m \omega^2 x^2 \right)e^{- \beta {\left\lvert{x}\right\rvert}} = -\frac{\beta \hbar^2}{2m} + \frac{m \omega^2}{4 \beta^3}.\end{aligned} \hspace{\stretch{1}}(4.9)

Note that evaluating this integral requires the origin to be avoided, where the derivative of ${\left\lvert{x}\right\rvert}$ is undefined. So one has to evaluate it in the limit $\int_{-\infty}^\infty = \int_{-\infty}^{-\epsilon} + \int_\epsilon^\infty$ (which is an easy way to do it anyways, since the absolute value can be eliminated by doubling the integral). Because of the curious end result, I also verified my calculation using Mathematica.

So, our ground state energy estimation, parametrized by $\beta$ becomes

\begin{aligned}E[\beta] = -\frac{\beta^2 \hbar^2}{2m} + \frac{m \omega^2}{4 \beta^2}.\end{aligned} \hspace{\stretch{1}}(4.10)

(Note: a typo was fixed above. There were originally two minus signs, but this error didn’t make it into the derivative below.)
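The absence of a real minimum is visible by scanning eq. 4.10 directly. A sketch, taking the expression exactly as stated, in units $\hbar = m = \omega = 1$:

```python
import numpy as np

# Eq. 4.10 as stated (note the sign of the kinetic term, which is what
# produces the curious result discussed below), hbar = m = omega = 1.
def E(beta):
    return -beta**2 / 2 + 1 / (4 * beta**2)

beta = np.linspace(0.1, 10.0, 1000)
print(np.all(np.diff(E(beta)) < 0))  # True: strictly decreasing, no minimum
```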

Observe that if we set the derivative of this equal to zero to find the “best” beta associated with this trial function

\begin{aligned}0 = \frac{\partial {E}}{\partial {\beta}} = -\frac{\beta \hbar^2}{2m} - \frac{m \omega^2}{2 \beta^3}\end{aligned} \hspace{\stretch{1}}(4.11)

we find that the parameter beta that best minimizes this ground state energy function is complex with value

\begin{aligned}\beta^2 = \pm \frac{i m \omega}{\sqrt{2} \hbar}.\end{aligned} \hspace{\stretch{1}}(4.12)

So, it appears that we can’t minimize 4.10 to find a best ground state energy estimate associated with the trial function 4.7. We do, however, know the exact ground state energy $\hbar \omega/2$ for the Harmonic oscillator. Is it possible to show that for all $\beta^2$ we have

\begin{aligned}\frac{\hbar \omega}{2} \le -\frac{ \beta^2 \hbar^2}{2m} + \frac{m \omega^2}{4 \beta^2}\end{aligned} \hspace{\stretch{1}}(4.13)

? This inequality would be expected if we can assume that the trial wavefunction has a Fourier series representation utilizing the actual energy eigenfunctions for the system.

# Some reflection.

I think that it is notable that I don’t believe the trial wave function for this problem lies in the span of the Hilbert space that describes the solutions to the Harmonic oscillator. Another thing of possible interest is the trouble near the origin for this wave function, when operated on by $P^2/2m$. Observe that Mathematica also required some special hand holding to deal with the origin.

I had initially thought that part of the value of this variational method was that we can use it despite not even knowing what the exact solution is (and in the case of the Helium atom, I believe it was stated in class that an exact closed form solution is not even known). This makes me wonder what restrictions must be imposed on the trial solutions to get a meaningful answer from the variational calculation?

Having started with a wavefunction that is probably not representable in the solution space is likely the bigger problem here. We probably need to adjust the treatment to account for that. Suppose we have

\begin{aligned}{\lvert {\phi} \rangle} = \sum_n c_n {\lvert {\psi_n} \rangle} + c_\perp {\lvert {\psi_\perp} \rangle}.\end{aligned} \hspace{\stretch{1}}(5.14)

where ${\lvert {\psi_\perp} \rangle}$ is unknown, and presumed not orthogonal to any of the energy eigenkets. We can still calculate the norm of the trial function

\begin{aligned}\left\langle{{\phi}} \vert {{\phi}}\right\rangle&=\sum_{n,m} \left\langle{{ c_n \psi_n + c_\perp \psi_\perp}} \vert {{ c_m \psi_m + c_\perp \psi_\perp}}\right\rangle \\ &=\sum_n {\left\lvert{c_n}\right\rvert}^2 + c_n^{*} c_\perp \left\langle{{\psi_n}} \vert {{\psi_\perp}}\right\rangle+ c_n c_\perp^{*} \left\langle{{\psi_\perp}} \vert {{\psi_n}}\right\rangle+ {\left\lvert{c_\perp}\right\rvert}^2\left\langle{{\psi_\perp}} \vert {{\psi_\perp}}\right\rangle \\ &=\left\langle{{\psi_\perp}} \vert {{\psi_\perp}}\right\rangle +\sum_n {\left\lvert{c_n}\right\rvert}^2 + 2 \text{Real} \left(c_n^{*} c_\perp \left\langle{{\psi_n}} \vert {{\psi_\perp}}\right\rangle \right).\end{aligned}

Similarly we can calculate the energy expectation for this unnormalized state and find

\begin{aligned}{\langle {\phi} \rvert} H {\lvert {\phi} \rangle}&=\sum_{n,m} {\langle { c_n \psi_n + c_\perp \psi_\perp} \rvert} H {\lvert { c_m \psi_m + c_\perp \psi_\perp} \rangle} \\ &=\sum_n {\left\lvert{c_n}\right\rvert}^2 E_n+ c_n^{*} c_\perp E_n\left\langle{{\psi_n}} \vert {{\psi_\perp}}\right\rangle+ c_n c_\perp^{*} E_n \left\langle{{\psi_\perp}} \vert {{\psi_n}}\right\rangle+ {\left\lvert{c_\perp}\right\rvert}^2{\langle {\psi_\perp} \rvert} H {\lvert {\psi_\perp} \rangle} \end{aligned}

Our normalized energy expectation is therefore the considerably messier

\begin{aligned}\begin{aligned}\frac{{\langle {\phi} \rvert} H {\lvert {\phi} \rangle}}{\left\langle{{\phi}} \vert {{\phi}}\right\rangle}&=\frac{\sum_n {\left\lvert{c_n}\right\rvert}^2 E_n+ c_n^{*} c_\perp E_n\left\langle{{\psi_n}} \vert {{\psi_\perp}}\right\rangle+ c_n c_\perp^{*} E_n \left\langle{{\psi_\perp}} \vert {{\psi_n}}\right\rangle+ {\left\lvert{c_\perp}\right\rvert}^2{\langle {\psi_\perp} \rvert} H {\lvert {\psi_\perp} \rangle} }{\left\langle{{\psi_\perp}} \vert {{\psi_\perp}}\right\rangle +\sum_m {\left\lvert{c_m}\right\rvert}^2 + 2 \text{Real} \left(c_m^{*} c_\perp \left\langle{{\psi_m}} \vert {{\psi_\perp}}\right\rangle \right)} \\ &\ge \frac{\sum_n {\left\lvert{c_n}\right\rvert}^2 E_0+ c_n^{*} c_\perp E_n\left\langle{{\psi_n}} \vert {{\psi_\perp}}\right\rangle+ c_n c_\perp^{*} E_n \left\langle{{\psi_\perp}} \vert {{\psi_n}}\right\rangle+ {\left\lvert{c_\perp}\right\rvert}^2{\langle {\psi_\perp} \rvert} H {\lvert {\psi_\perp} \rangle} }{\left\langle{{\psi_\perp}} \vert {{\psi_\perp}}\right\rangle +\sum_m {\left\lvert{c_m}\right\rvert}^2 + 2 \text{Real} \left(c_m^{*} c_\perp \left\langle{{\psi_m}} \vert {{\psi_\perp}}\right\rangle \right)}\end{aligned}\end{aligned} \hspace{\stretch{1}}(5.15)

With a requirement to include the perpendicular cross terms the norm doesn’t just cancel out, leaving us with a clean estimation of the ground state energy. In order to utilize this variational method, we implicitly have an assumption that the $\left\langle{{\psi_\perp}} \vert {{\psi_\perp}}\right\rangle$ and $\left\langle{{\psi_m}} \vert {{\psi_\perp}}\right\rangle$ terms in the denominator are sufficiently small that they can be neglected.

For this harmonic oscillator problem, it would be interesting to calculate that Fourier remainder explicitly.

# References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.

## PHY450H1S. Relativistic Electrodynamics Lecture 15 (Taught by Prof. Erich Poppitz). Fourier solution of Maxwell’s vacuum wave equation in the Coulomb gauge.

Posted by peeterjoot on March 2, 2011

Covering chapter 6 material from the text [1].

Covering lecture notes pp. 115-127: reminder on wave equations (115); reminder on Fourier series and integral (115-117); Fourier expansion of the EM potential in Coulomb gauge and equation of motion for the spatial Fourier components (118-119); the general solution of Maxwell’s equations in vacuum (120-121) [Tuesday, Mar. 1]

# Review of wave equation results obtained.

Maxwell’s equations in vacuum can be studied in either the Coulomb gauge or the Lorentz gauge.

**Coulomb gauge**

\begin{aligned}A^0 &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{A} &= 0 \\ \left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) \mathbf{A} &= 0\end{aligned} \hspace{\stretch{1}}(2.1)

**Lorentz gauge**

\begin{aligned}\partial_i A^i &= 0 \\ \left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) A^i &= 0\end{aligned} \hspace{\stretch{1}}(2.4)

Note that $\partial_i A^i = 0$ is invariant under gauge transformations

\begin{aligned}A^i \rightarrow A^i + \partial^i \chi\end{aligned} \hspace{\stretch{1}}(2.6)

where

\begin{aligned}\partial_i \partial^i \chi = 0,\end{aligned} \hspace{\stretch{1}}(2.7)

so if one uses the Lorentz gauge, this residual gauge freedom has to be fixed.

However, in both cases we have

\begin{aligned}\left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) f = 0\end{aligned} \hspace{\stretch{1}}(2.8)

where

\begin{aligned}\frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \end{aligned} \hspace{\stretch{1}}(2.9)

is the wave operator.

Consider

\begin{aligned}\Delta = \frac{\partial^2 {{}}}{\partial {{x}}^2}\end{aligned} \hspace{\stretch{1}}(2.10)

where we are looking for a solution that is independent of $y, z$. Recall that the general solution for this equation has the form

\begin{aligned}f(t, x) = F_1 \left(t - \frac{x}{c}\right)+F_2 \left(t + \frac{x}{c}\right)\end{aligned} \hspace{\stretch{1}}(2.11)

PICTURE: superposition of two waves with $F_1$ moving along the x-axis in the positive direction, and $F_2$ in the negative x direction.

It is notable that the text derives 2.11 in a particularly slick way. It’s still black magic, since one has to know the solution to find it, but very very cool.

# Review of Fourier methods.

It is often convenient to impose periodic boundary conditions

\begin{aligned}\mathbf{A}(\mathbf{x} + \mathbf{e}_i L) = \mathbf{A}(\mathbf{x}), i = 1,2,3\end{aligned} \hspace{\stretch{1}}(3.12)

## In one dimension

\begin{aligned}f(x + L) = f(x)\end{aligned} \hspace{\stretch{1}}(3.13)

\begin{aligned}f(x) = \sum_{n=-\infty}^\infty e^{i \frac{2 \pi n}{L} x} \tilde{f}_n\end{aligned} \hspace{\stretch{1}}(3.14)

When $f(x)$ is real we also have

\begin{aligned}f^{*}(x) = \sum_{n = -\infty}^\infty e^{-i \frac{2 \pi n}{L} x} (\tilde{f}_n)^{*}\end{aligned} \hspace{\stretch{1}}(3.15)

which implies

\begin{aligned}{\tilde{f}^{*}}_{n} = \tilde{f}_{-n}.\end{aligned} \hspace{\stretch{1}}(3.16)

We introduce a wave number

\begin{aligned}k_n = \frac{2 \pi n}{L},\end{aligned} \hspace{\stretch{1}}(3.17)

allowing a slightly simpler expression of the Fourier decomposition

\begin{aligned}f(x) = \sum_{n=-\infty}^\infty e^{i k_n x} \tilde{f}_{k_n}.\end{aligned} \hspace{\stretch{1}}(3.18)

The inverse transform is obtained by integration over some length $L$ interval

\begin{aligned}\tilde{f}_{k_n} = \frac{1}{{L}} \int_{-L/2}^{L/2} dx e^{-i k_n x} f(x)\end{aligned} \hspace{\stretch{1}}(3.19)
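The transform pair 3.18, 3.19 can also be checked numerically. A sketch, assuming numpy and a single-mode test function (the discrete mean over one full period is essentially exact for these integer-frequency modes):

```python
import numpy as np

L = 2.0
n = 4096
x = -L / 2 + L * np.arange(n) / n  # uniform grid over one period

def k(m):
    return 2 * np.pi * m / L

f = np.cos(k(3) * x)  # known coefficients: 1/2 at n = +/-3

def f_tilde(m):
    """Eq. 3.19 evaluated as a discrete mean over the period."""
    return np.mean(f * np.exp(-1j * k(m) * x))

print(f_tilde(3), f_tilde(-3), abs(f_tilde(2)))  # ~0.5, ~0.5, ~0
```

Note that $\tilde{f}_{-3} = \tilde{f}_3^{*}$ here, consistent with eq. 3.16 for a real $f(x)$.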

**Verify:**

We should be able to recover the Fourier coefficient by utilizing the above

\begin{aligned}\frac{1}{{L}} \int_{-L/2}^{L/2} dx e^{-i k_n x} \sum_{m=-\infty}^\infty e^{i k_m x} \tilde{f}_{k_m} &= \sum_{m = -\infty}^\infty \tilde{f}_{k_m} \delta_{mn} = \tilde{f}_{k_n},\end{aligned}

where we use the easily verifiable fact that

\begin{aligned}\frac{1}{{L}} \int_{-L/2}^{L/2} dx e^{i (k_m - k_n) x} = \left\{ \begin{array}{l l}0 & \quad \mbox{if $m \ne n$} \\ 1 & \quad \mbox{if $m = n$} \\ \end{array} \right.\end{aligned} \hspace{\stretch{1}}(3.20)

It is conventional to move the $1/L$ normalization into the sum, writing $\tilde{f}(k_n) = L \tilde{f}_{k_n}$, so that

\begin{aligned}f(x) &= \frac{1}{{L}} \sum_n \tilde{f}(k_n) e^{i k_n x} \\ \tilde{f}(k_n) &= \int_{-L/2}^{L/2} dx f(x) e^{-i k_n x}\end{aligned} \hspace{\stretch{1}}(3.21)

To take $L \rightarrow \infty$ notice

\begin{aligned}k_n = \frac{2 \pi}{L} n\end{aligned} \hspace{\stretch{1}}(3.23)

When $n$ changes by $\Delta n = 1$, $k_n$ changes by $\Delta k_n = \frac{2 \pi}{L} \Delta n$.

Using this

\begin{aligned}f(x) = \frac{1}{{2\pi}} \sum_n \left( \frac{2\pi}{L} \Delta n \right) \tilde{f}(k_n) e^{i k_n x}\end{aligned} \hspace{\stretch{1}}(3.24)

With $L \rightarrow \infty$, and $\Delta k_n \rightarrow 0$

\begin{aligned}f(x) &= \int_{-\infty}^\infty \frac{dk}{2\pi} \tilde{f}(k) e^{i k x} \\ \tilde{f}(k) &= \int_{-\infty}^\infty dx f(x) e^{-i k x}\end{aligned} \hspace{\stretch{1}}(3.25)

**Verify:**

A loose verification of the inversion relationship (the most important bit) is possible by substitution

\begin{aligned}\int \frac{dk}{2\pi} e^{i k x} \tilde{f}(k) &= \iint \frac{dk}{2\pi} e^{i k x} dx' f(x') e^{-i k x'} \\ &= \int dx' f(x') \frac{1}{{2\pi}} \int dk e^{i k (x - x')}\end{aligned}

Now we employ the old physics ploy where we identify

\begin{aligned}\frac{1}{{2\pi}} \int dk e^{i k (x - x')} = \delta(x - x').\end{aligned} \hspace{\stretch{1}}(3.27)

With that we see that we recover the function $f(x)$ above as desired.

## In three dimensions

\begin{aligned}\mathbf{A}(\mathbf{x}, t) &= \int \frac{d^3 \mathbf{k}}{(2\pi)^3} \tilde{\mathbf{A}}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x}} \\ \tilde{\mathbf{A}}(\mathbf{k}, t) &= \int d^3 \mathbf{x} \mathbf{A}(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x}}\end{aligned} \hspace{\stretch{1}}(3.28)

## Application to the wave equation

\begin{aligned}0 &= \left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) \mathbf{A}(\mathbf{x}, t) \\ &=\left( \frac{1}{{c^2}} \frac{\partial^2 {{}}}{\partial {{t}}^2} - \Delta \right) \int \frac{d^3 \mathbf{k}}{(2\pi)^3} \tilde{\mathbf{A}}(\mathbf{k}, t) e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=\int \frac{d^3 \mathbf{k}}{(2\pi)^3} \left( \frac{1}{{c^2}} \partial_{tt} \tilde{\mathbf{A}}(\mathbf{k}, t) + \mathbf{k}^2 \tilde{\mathbf{A}}(\mathbf{k}, t)\right)e^{i \mathbf{k} \cdot \mathbf{x}} \end{aligned}

Now operate with $\int d^3 \mathbf{x} e^{-i \mathbf{p} \cdot \mathbf{x} }$

\begin{aligned}0 &=\int d^3 \mathbf{x} e^{-i \mathbf{p} \cdot \mathbf{x} }\int \frac{d^3 \mathbf{k}}{(2\pi)^3} \left( \frac{1}{{c^2}} \partial_{tt} \tilde{\mathbf{A}}(\mathbf{k}, t) + \mathbf{k}^2 \tilde{\mathbf{A}}(\mathbf{k}, t)\right)e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=\int d^3 \mathbf{k}\delta^3(\mathbf{p} -\mathbf{k}) \left( \frac{1}{{c^2}} \partial_{tt} \tilde{\mathbf{A}}(\mathbf{k}, t) + \mathbf{k}^2 \tilde{\mathbf{A}}(\mathbf{k}, t)\right)\end{aligned}

Since this is true for all $\mathbf{p}$ we have

\begin{aligned}\partial_{tt} \tilde{\mathbf{A}}(\mathbf{p}, t) = -c^2 \mathbf{p}^2 \tilde{\mathbf{A}}(\mathbf{p}, t) \end{aligned} \hspace{\stretch{1}}(3.30)

For every value of momentum we have a harmonic oscillator!

\begin{aligned}\ddot{x} = -\omega^2 x\end{aligned} \hspace{\stretch{1}}(3.31)

Fourier modes of EM potential in vacuum obey

\begin{aligned}\partial_{tt} \tilde{\mathbf{A}}(\mathbf{k}, t) = -c^2 \mathbf{k}^2 \tilde{\mathbf{A}}(\mathbf{k}, t)\end{aligned} \hspace{\stretch{1}}(3.32)
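The harmonic oscillator behaviour of each mode is easy to confirm by direct numerical integration of eq. 3.32. A sketch (assuming numpy, units with $c = \left\lvert \mathbf{k} \right\rvert = 1$, and a hand-rolled RK4 step; the initial conditions are chosen so that the exact solution is $e^{-i \omega t}$):

```python
import numpy as np

omega = 1.0  # omega_k = c |k|, with c = |k| = 1

def rhs(y):
    """y = (A, dA/dt); second order ODE A'' = -omega^2 A as a system."""
    A, Adot = y
    return np.array([Adot, -omega**2 * A])

# Initial data chosen so the exact solution is A(t) = exp(-i omega t).
y = np.array([1.0 + 0.0j, -1j * omega])
dt, steps = 0.001, 2000

for _ in range(steps):  # classical fourth-order Runge-Kutta
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    y = y + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

t = dt * steps
print(abs(y[0] - np.exp(-1j * omega * t)))  # ~0 (RK4 error ~ dt^4)
```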

Because we are operating in the Coulomb gauge we must also have zero divergence. Let’s see how that translates to our Fourier representation. The gauge condition implies

\begin{aligned}0 &= \boldsymbol{\nabla} \cdot \mathbf{A}(\mathbf{x}, t) \\ &= \int \frac{d^3 \mathbf{k} }{(2 \pi)^3} \boldsymbol{\nabla} \cdot \left( e^{i \mathbf{k} \cdot \mathbf{x}} \tilde{\mathbf{A}}(\mathbf{k}, t) \right)\end{aligned}

The chain rule for the divergence in this case takes the form

\begin{aligned}\boldsymbol{\nabla} \cdot (\phi \mathbf{B}) = (\boldsymbol{\nabla} \phi) \cdot \mathbf{B} + \phi \boldsymbol{\nabla} \cdot \mathbf{B}.\end{aligned} \hspace{\stretch{1}}(3.33)

But since our vector function $\tilde{\mathbf{A}}$ is not a function of spatial coordinates we have

\begin{aligned}0 = \int \frac{d^3 \mathbf{k} }{(2 \pi)^3} e^{i \mathbf{k} \cdot \mathbf{x}} (i \mathbf{k} \cdot \tilde{\mathbf{A}}(\mathbf{k}, t)).\end{aligned} \hspace{\stretch{1}}(3.34)

This has two immediate consequences. The first is that our momentum space potential is perpendicular to the wave number vector at all points in momentum space, and the second gives us a conjugate relation (substitute $\mathbf{k} \rightarrow -\mathbf{k}'$ after taking conjugates for that one)

\begin{aligned}\mathbf{k} \cdot \tilde{\mathbf{A}}(\mathbf{k}, t) &= 0 \\ \tilde{\mathbf{A}}(-\mathbf{k}, t) &= \tilde{\mathbf{A}}^{*}(\mathbf{k}, t).\end{aligned} \hspace{\stretch{1}}(3.35)

Utilizing this conjugate relation, the potential can be written as

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \int \frac{d^3 \mathbf{k}}{(2\pi)^3} e^{i \mathbf{k} \cdot \mathbf{x}} \left( \frac{1}{{2}} \tilde{\mathbf{A}}(\mathbf{k}, t) + \frac{1}{{2}} \tilde{\mathbf{A}}^{*}(- \mathbf{k}, t) \right)\end{aligned} \hspace{\stretch{1}}(3.37)

Since our system is essentially a harmonic oscillator at each point in momentum space

\begin{aligned}\partial_{tt} \tilde{\mathbf{A}}(\mathbf{k}, t) &= - \omega_k^2 \tilde{\mathbf{A}}(\mathbf{k}, t) \\ \omega_k^2 &= c^2 \mathbf{k}^2\end{aligned} \hspace{\stretch{1}}(3.38)

our general solution is of the form

\begin{aligned}\tilde{\mathbf{A}}(\mathbf{k}, t) &= e^{i \omega_k t} \mathbf{a}_{+}(\mathbf{k}) +e^{-i \omega_k t} \mathbf{a}_{-}(\mathbf{k}) \\ \tilde{\mathbf{A}}^{*}(\mathbf{k}, t) &= e^{-i \omega_k t} \mathbf{a}_{+}^{*}(\mathbf{k}) +e^{i \omega_k t} \mathbf{a}_{-}^{*}(\mathbf{k})\end{aligned} \hspace{\stretch{1}}(3.40)

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \int \frac{d^3 \mathbf{k}}{(2 \pi)^3} e^{i \mathbf{k} \cdot \mathbf{x}} \frac{1}{{2}} \left( e^{i \omega_k t} (\mathbf{a}_{+}(\mathbf{k}) + \mathbf{a}_{-}^{*}(-\mathbf{k})) +e^{-i \omega_k t} (\mathbf{a}_{-}(\mathbf{k}) + \mathbf{a}_{+}^{*}(-\mathbf{k})) \right)\end{aligned} \hspace{\stretch{1}}(3.42)

Define

\begin{aligned}\boldsymbol{\beta}(\mathbf{k}) \equiv \frac{1}{{2}} (\mathbf{a}_{-}(\mathbf{k}) + \mathbf{a}_{+}^{*}(-\mathbf{k}) )\end{aligned} \hspace{\stretch{1}}(3.43)

so that

\begin{aligned}\boldsymbol{\beta}(-\mathbf{k}) = \frac{1}{{2}} (\mathbf{a}_{+}^{*}(\mathbf{k}) + \mathbf{a}_{-}(-\mathbf{k}))\end{aligned} \hspace{\stretch{1}}(3.44)

Our solution now takes the form

\begin{aligned}\mathbf{A}(\mathbf{x}, t) = \int \frac{d^3\mathbf{k}}{(2 \pi)^3} \left( e^{i (\mathbf{k} \cdot \mathbf{x} + \omega_k t)} \boldsymbol{\beta}^{*}(-\mathbf{k})+e^{i (\mathbf{k} \cdot \mathbf{x} - \omega_k t)} \boldsymbol{\beta}(\mathbf{k})\right)\end{aligned} \hspace{\stretch{1}}(3.45)

**Claim:**

This is now manifestly real. To see this, consider the first term with $\mathbf{k} = -\mathbf{k}'$, noting that $\int_{-\infty}^\infty dk = \int_{\infty}^{-\infty} (-dk') = \int_{-\infty}^\infty dk'$ with $dk = -dk'$

\begin{aligned}\int \frac{d^3\mathbf{k}'}{(2 \pi)^3} e^{i (-\mathbf{k}' \cdot \mathbf{x} + \omega_k t)} \boldsymbol{\beta}^{*}(\mathbf{k}')\end{aligned} \hspace{\stretch{1}}(3.46)

Dropping primes this is the conjugate of the second term.
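The reality claim can also be checked numerically with a discrete analogue of eq. 3.45. A sketch, assuming numpy, one spatial dimension, a small symmetric set of wave numbers, and randomly chosen scalar amplitudes standing in for the vector $\boldsymbol{\beta}(\mathbf{k})$ (all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
ks = np.array([-2.0, -1.0, 1.0, 2.0])  # symmetric set of wave numbers
beta = {k: rng.normal() + 1j * rng.normal() for k in ks}  # arbitrary beta(k)

def A(x, t, c=1.0):
    """Discrete 1D analogue of eq. 3.45, summed over the symmetric k set."""
    total = 0.0 + 0.0j
    for k in ks:
        w = c * abs(k)  # omega_k = c |k|
        total += (np.exp(1j * (k * x + w * t)) * np.conj(beta[-k])
                  + np.exp(1j * (k * x - w * t)) * beta[k])
    return total

print(abs(A(0.3, 1.7).imag))  # ~0: the sum is real for any x, t
```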

**Claim:**

We have $\mathbf{k} \cdot \boldsymbol{\beta}(\mathbf{k}) = 0$.

Since we have $\mathbf{k} \cdot \tilde{\mathbf{A}}(\mathbf{k}, t) = 0$, 3.40 implies that we have $\mathbf{k} \cdot \mathbf{a}_{\pm}(\mathbf{k}) = 0$. Since each of these vector integration constants is perpendicular to $\mathbf{k}$ at that point in momentum space, so is the linear combination $\boldsymbol{\beta}(\mathbf{k})$.

# References

[1] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980.

## Electrodynamic field energy for vacuum (reworked)

Posted by peeterjoot on December 21, 2009

# Previous version.

Reducing the products in the Dirac basis makes life more complicated than it needs to be (this became obvious when attempting to derive an expression for the Poynting integral).

# Motivation.

In Energy and momentum for Complex electric and magnetic field phasors [PDF] we worked out how to formulate the energy momentum tensor for complex vector fields (i.e. phasors) in the Geometric Algebra formalism. To recap, for the field $F = \mathbf{E} + I c \mathbf{B}$, where $\mathbf{E}$ and $\mathbf{B}$ may be complex vectors, we have for Maxwell’s equation

\begin{aligned}\nabla F = J/\epsilon_0 c.\end{aligned} \quad\quad\quad(1)

This is a doubly complex representation, with the four vector pseudoscalar $I = \gamma_0 \gamma_1 \gamma_2 \gamma_3$ acting as a non-commutative imaginary, as well as real and imaginary parts for the electric and magnetic field vectors. We take the real part (not the scalar part) of any bivector solution $F$ of Maxwell’s equation as the actual solution, but allow ourselves the freedom to work with the complex phasor representation when convenient. In these phasor vectors, the imaginary $i$, as in $\mathbf{E} = \text{Real}(\mathbf{E}) + i \text{Imag}(\mathbf{E})$, is a commuting imaginary, commuting with all the multivector elements in the algebra.

The real valued, four vector, energy momentum tensor $T(a)$ was found to be

\begin{aligned}T(a) = \frac{\epsilon_0}{4} \Bigl( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} \Bigr) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \quad\quad\quad(2)

To supply some context that gives meaning to this tensor the associated conservation relationship was found to be

\begin{aligned}\nabla \cdot T(a) &= a \cdot \frac{1}{{ c }} \text{Real} \left( J \cdot {{F}}^{*} \right).\end{aligned} \quad\quad\quad(3)

and in particular for $a = \gamma^0$, this four vector divergence takes the form

\begin{aligned}\frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ \boldsymbol{\nabla} \cdot \frac{1}{{\mu_0}} \text{Real} (\mathbf{E} \times {\mathbf{B}}^{*} )+ \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) = 0,\end{aligned} \quad\quad\quad(4)

relating the energy term $T^{00} = T(\gamma^0) \cdot \gamma^0$ and the Poynting spatial vector $T(\gamma^0) \wedge \gamma^0$ with the current density and electric field product that constitutes the energy portion of the Lorentz force density.

Let’s apply this to calculating the energy associated with the field that is periodic within a rectangular prism as done by Bohm in [2]. We do not necessarily need the Geometric Algebra formalism for this calculation, but this will be a fun way to attempt it.

# Setup

Let’s assume a Fourier representation for the four vector potential $A$ for the field $F = \nabla \wedge A$. That is

\begin{aligned}A = \sum_{\mathbf{k}} A_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}},\end{aligned} \quad\quad\quad(5)

where summation is over all angular wave number triplets $\mathbf{k} = 2 \pi (k_1/\lambda_1, k_2/\lambda_2, k_3/\lambda_3)$. The Fourier coefficients $A_\mathbf{k} = {A_\mathbf{k}}^\mu \gamma_\mu$ are allowed to be complex valued, as is the resulting four vector $A$, and the associated bivector field $F$.

Fourier inversion, with $V = \lambda_1 \lambda_2 \lambda_3$, follows from

\begin{aligned}\delta_{\mathbf{k}', \mathbf{k}} =\frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} e^{ i \mathbf{k}' \cdot \mathbf{x}} e^{-i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(6)

but only this orthogonality relationship and not the Fourier coefficients themselves

\begin{aligned}A_\mathbf{k} = \frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} A(\mathbf{x}, t) e^{- i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(7)

will be of interest here. Evaluating the curl for this potential yields

\begin{aligned}F = \nabla \wedge A= \sum_{\mathbf{k}} \left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^m \wedge A_\mathbf{k} \frac{2 \pi i k_m}{\lambda_m} \right) e^{i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(8)
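As a quick numerical aside (not part of the original derivation), the orthogonality relation (6) is easy to verify on a sampled grid. This is a minimal sketch with arbitrary box dimensions; uniform sampling sums the complex exponential exactly, so the box average reproduces the Kronecker delta:

```python
import numpy as np

# Numerical check of the orthogonality relation (6): the box average of
# exp(i (k' - k) . x) over one period is 1 when k' = k and 0 otherwise.
# The box dimensions lam = (lambda_1, lambda_2, lambda_3) are arbitrary choices.
lam = np.array([1.0, 2.0, 3.0])
N = 16  # samples per axis; uniform sampling sums the exponential exactly

def wave_vector(triplet):
    """Angular wave vector k = 2 pi (k_1/lambda_1, k_2/lambda_2, k_3/lambda_3)."""
    return 2 * np.pi * np.array(triplet) / lam

def box_average(kp, k):
    """(1/V) integral over the box of exp(i k' . x) exp(-i k . x)."""
    axes = [np.linspace(0, L, N, endpoint=False) for L in lam]
    X = np.meshgrid(*axes, indexing='ij')
    phase = sum((kp[i] - k[i]) * X[i] for i in range(3))
    return np.exp(1j * phase).mean()

assert abs(box_average(wave_vector((1, 2, -1)), wave_vector((1, 2, -1))) - 1) < 1e-12
assert abs(box_average(wave_vector((1, 2, -1)), wave_vector((0, 2, -1)))) < 1e-12
```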

Since the four vector potential has been expressed using an explicit split into time and space components it will be natural to re-express the bivector field in terms of scalar and (spatial) vector potentials, with the Fourier coefficients. Writing $\sigma_m = \gamma_m \gamma_0$ for the spatial basis vectors, ${A_\mathbf{k}}^0 = \phi_\mathbf{k}$, and $\mathbf{A}_\mathbf{k} = {A_\mathbf{k}}^m \sigma_m$, this is

\begin{aligned}A_\mathbf{k} = (\phi_\mathbf{k} + \mathbf{A}_\mathbf{k}) \gamma_0.\end{aligned} \quad\quad\quad(9)

The Faraday bivector field $F$ is then

\begin{aligned}F = \sum_\mathbf{k} \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) e^{i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(10)

This is now enough to express the energy momentum tensor $T(\gamma^\mu)$

\begin{aligned}T(\gamma^\mu) &= -\frac{\epsilon_0}{2} \sum_{\mathbf{k},\mathbf{k}'}\text{Real} \left(\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}'})}}^{*} + i \mathbf{k}' {{\phi_{\mathbf{k}'}}}^{*} - i \mathbf{k}' \wedge {{\mathbf{A}_{\mathbf{k}'}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) e^{i (\mathbf{k} -\mathbf{k}') \cdot \mathbf{x}}\right).\end{aligned} \quad\quad\quad(11)

It will be more convenient to work with a scalar plus bivector (spatial vector) form of this tensor, and right multiplication by $\gamma_0$ produces such a split

\begin{aligned}T(\gamma^\mu) \gamma_0 = \left\langle{{T(\gamma^\mu) \gamma_0}}\right\rangle + \sigma_a \left\langle{{ \sigma_a T(\gamma^\mu) \gamma_0 }}\right\rangle\end{aligned} \quad\quad\quad(12)

The primary object of this treatment will be consideration of the $\mu = 0$ components of the tensor, which provide a split into energy density $T(\gamma^0) \cdot \gamma_0$, and Poynting vector (momentum density) $T(\gamma^0) \wedge \gamma_0$.

Our first step is to integrate (12) over the volume $V$. This integration and the orthogonality relationship (6), removes the exponentials, leaving

\begin{aligned}\int T(\gamma^\mu) \cdot \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) \gamma_0 }}\right\rangle \\ \int T(\gamma^\mu) \wedge \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \gamma^\mu \left( -\frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} - i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) \gamma_0}}\right\rangle \end{aligned} \quad\quad\quad(13)

Because $\gamma_0$ commutes with the spatial bivectors, and anticommutes with the spatial vectors, the remainder of the Dirac basis vectors in these expressions can be eliminated

\begin{aligned}\int T(\gamma^0) \cdot \gamma_0&= -\frac{\epsilon_0 V }{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(15)

\begin{aligned}\int T(\gamma^0) \wedge \gamma_0&= -\frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(16)

\begin{aligned}\int T(\gamma^m) \cdot \gamma_0&= \frac{\epsilon_0 V }{2} \sum_{\mathbf{k}}\text{Real} \left\langle{{\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \sigma_m\left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \end{aligned} \quad\quad\quad(17)

\begin{aligned}\int T(\gamma^m) \wedge \gamma_0&= \frac{\epsilon_0 V}{2} \sum_{\mathbf{k}}\text{Real} \sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \sigma_m\left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle.\end{aligned} \quad\quad\quad(18)

# Expanding the energy momentum tensor components.

## Energy

In (15) only the bivector-bivector and vector-vector products produce any scalar grades. Except for the bivector product this can be done by inspection. For that part we utilize the identity

\begin{aligned}\left\langle{{ (\mathbf{k} \wedge \mathbf{a}) (\mathbf{k} \wedge \mathbf{b}) }}\right\rangle= (\mathbf{a} \cdot \mathbf{k}) (\mathbf{b} \cdot \mathbf{k}) - \mathbf{k}^2 (\mathbf{a} \cdot \mathbf{b}).\end{aligned} \quad\quad\quad(19)
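This identity is easy to spot check numerically: in 3D the bivector $\mathbf{k} \wedge \mathbf{a}$ is dual to the cross product, so the scalar grade of the bivector product equals $-(\mathbf{k} \times \mathbf{a}) \cdot (\mathbf{k} \times \mathbf{b})$. A minimal sketch with random vectors:

```python
import numpy as np

# Spot check of identity (19). In 3D the bivector k ^ a is dual to the cross
# product (k ^ a = I (k x a)), so the scalar grade of (k ^ a)(k ^ b) equals
# -(k x a) . (k x b). Random real vectors suffice for this bilinear identity.
rng = np.random.default_rng(0)
for _ in range(100):
    k, a, b = rng.standard_normal((3, 3))
    lhs = -np.dot(np.cross(k, a), np.cross(k, b))
    rhs = np.dot(a, k) * np.dot(b, k) - np.dot(k, k) * np.dot(a, b)
    assert abs(lhs - rhs) < 1e-10
```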

This leaves for the energy $H = \int T(\gamma^0) \cdot \gamma_0$ in the volume

\begin{aligned}H = \frac{\epsilon_0 V}{2} \sum_\mathbf{k} \left(\frac{1}{{c^2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2 +\mathbf{k}^2 \left( {\left\lvert{\phi_\mathbf{k}}\right\rvert}^2 + {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right) - {\left\lvert{\mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2+ \frac{2}{c} \text{Real} \left( i {{\phi_\mathbf{k}}}^{*} \left( \mathbf{k} \cdot \dot{\mathbf{A}}_\mathbf{k} \right) \right)\right)\end{aligned} \quad\quad\quad(20)

We are left with a completely real expression, and one without any explicit Geometric Algebra. This does not look like the Harmonic oscillator Hamiltonian that was expected. A gauge transformation to eliminate $\phi_\mathbf{k}$ and an observation about when $\mathbf{k} \cdot \mathbf{A}_\mathbf{k}$ equals zero will give us that, but first let’s get the mechanical jobs done, and reduce the products for the field momentum.

## Momentum

Now move on to (16). For the factors other than $\sigma_a$ only the vector-bivector products can contribute to the scalar product. We have two such products, one of the form

\begin{aligned}\sigma_a \left\langle{{ \sigma_a \mathbf{a} (\mathbf{k} \wedge \mathbf{c}) }}\right\rangle&=\sigma_a (\mathbf{c} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{k}) - \sigma_a (\mathbf{k} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{c}) \\ &=\mathbf{c} (\mathbf{a} \cdot \mathbf{k}) - \mathbf{k} (\mathbf{a} \cdot \mathbf{c}),\end{aligned}

and the other

\begin{aligned}\sigma_a \left\langle{{ \sigma_a (\mathbf{k} \wedge \mathbf{c}) \mathbf{a} }}\right\rangle&=\sigma_a (\mathbf{k} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{c}) - \sigma_a (\mathbf{c} \cdot \sigma_a) (\mathbf{a} \cdot \mathbf{k}) \\ &=\mathbf{k} (\mathbf{a} \cdot \mathbf{c}) - \mathbf{c} (\mathbf{a} \cdot \mathbf{k}).\end{aligned}
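Both reductions amount to the statement that the vector grade of $\mathbf{a} (\mathbf{k} \wedge \mathbf{c})$ is $\mathbf{a} \cdot (\mathbf{k} \wedge \mathbf{c}) = \mathbf{c} (\mathbf{a} \cdot \mathbf{k}) - \mathbf{k} (\mathbf{a} \cdot \mathbf{c})$, which in 3D is the BAC-CAB rule in disguise, since $\mathbf{a} \cdot (\mathbf{k} \wedge \mathbf{c}) = -\mathbf{a} \times (\mathbf{k} \times \mathbf{c})$. A quick numerical sanity check:

```python
import numpy as np

# Check of the vector-bivector contraction used above: the vector grade of
# a (k ^ c) is a . (k ^ c) = c (a.k) - k (a.c), which in 3D equals
# -a x (k x c) by the BAC-CAB rule.
rng = np.random.default_rng(1)
for _ in range(100):
    a, k, c = rng.standard_normal((3, 3))
    lhs = -np.cross(a, np.cross(k, c))
    rhs = c * np.dot(a, k) - k * np.dot(a, c)
    assert np.allclose(lhs, rhs)
```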

The momentum $\mathbf{P} = \int T(\gamma^0) \wedge \gamma_0$ in this volume follows by computation of

\begin{aligned}&\sigma_a \left\langle{{ \sigma_a\left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} - i \mathbf{k} \wedge {{\mathbf{A}_{\mathbf{k}}}}^{*} \right) \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} + i \mathbf{k} \wedge \mathbf{A}_\mathbf{k} \right) }}\right\rangle \\ &= i \mathbf{A}_\mathbf{k} \left( \left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} \right) \cdot \mathbf{k} \right) - i \mathbf{k} \left( \left( -\frac{1}{{c}} {{(\dot{\mathbf{A}}_{\mathbf{k}})}}^{*} + i \mathbf{k} {{\phi_{\mathbf{k}}}}^{*} \right) \cdot \mathbf{A}_\mathbf{k} \right) \\ &- i \mathbf{k} \left( \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} \right) \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right) + i {{\mathbf{A}_{\mathbf{k}}}}^{*} \left( \left( \frac{1}{{c}} \dot{\mathbf{A}}_\mathbf{k} + i \mathbf{k} \phi_\mathbf{k} \right) \cdot \mathbf{k} \right)\end{aligned}

The products pair up into nice conjugates; taking real parts and premultiplying by $-\epsilon_0 V/2$ gives the desired result. Observe that two of these terms cancel, and another two have no real part. Those last are

\begin{aligned}-\frac{\epsilon_0 V \mathbf{k}}{2 c} \text{Real} \left( i \left( {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{A}_\mathbf{k}+\dot{\mathbf{A}}_\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right) \right)&=-\frac{\epsilon_0 V \mathbf{k}}{2 c} \text{Real} \left( i \frac{d}{dt} \left( \mathbf{A}_\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*} \right) \right)\end{aligned}

The real part of this pure imaginary term $i \frac{d}{dt} {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2$ is zero, leaving just

\begin{aligned}\mathbf{P} &= \epsilon_0 V \sum_{\mathbf{k}}\text{Real} \left(i \mathbf{A}_\mathbf{k} \left( \frac{1}{{c}} {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{k} \right)+ \mathbf{k}^2 \phi_\mathbf{k} {{ \mathbf{A}_\mathbf{k} }}^{*}- \mathbf{k} {{\phi_\mathbf{k}}}^{*} (\mathbf{k} \cdot \mathbf{A}_\mathbf{k})\right)\end{aligned} \quad\quad\quad(21)

I am not sure why exactly, but I actually expected a term with ${\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2$, quadratic in the vector potential. Is there a mistake above?

## Gauge transformation to simplify the Hamiltonian.

In (20) something that looked like the Harmonic oscillator was expected. On the surface this does not appear to be such a beast. Exploitation of gauge freedom is required to make the simplification that puts things into the Harmonic oscillator form.

If we are to change our four vector potential $A \rightarrow A + \nabla \psi$, then Maxwell’s equation takes the form

\begin{aligned}J/\epsilon_0 c = \nabla (\nabla \wedge (A + \nabla \psi)) = \nabla (\nabla \wedge A) + \nabla (\underbrace{\nabla \wedge \nabla \psi}_{=0}),\end{aligned} \quad\quad\quad(22)

which is unchanged by the addition of the gradient to any original potential solution to the equation. In coordinates this is a transformation of the form

\begin{aligned}A^\mu \rightarrow A^\mu + \partial^\mu \psi,\end{aligned} \quad\quad\quad(23)

and we can use this to force any one of the potential coordinates to zero. For this problem, it appears that it is desirable to seek a $\psi$ such that $A^0 + \partial_0 \psi = 0$. That is

\begin{aligned}\sum_\mathbf{k} \phi_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}} + \frac{1}{{c}} \partial_t \psi = 0.\end{aligned} \quad\quad\quad(24)

Or,

\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x},0) - c \sum_\mathbf{k} e^{i \mathbf{k} \cdot \mathbf{x}} \int_{\tau=0}^t \phi_\mathbf{k}(\tau) d\tau.\end{aligned} \quad\quad\quad(25)
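Equation (24) fixes $\partial_t \psi = -c \sum_\mathbf{k} \phi_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}}$. Here is a single mode sanity check, evaluated at $\mathbf{x} = 0$ with an illustrative choice of $\phi_\mathbf{k}(t)$, using a numerical derivative of $\psi$:

```python
import numpy as np

# Check of the gauge function for a single Fourier mode: with psi chosen so
# that d psi/dt = -c phi(t), the combination phi + (1/c) d psi/dt of (24)
# vanishes. phi(t) is an illustrative choice; the time derivative of psi is
# taken numerically via central differences.
c = 3.0

def phi(t):           # arbitrary scalar potential coefficient for one mode
    return np.cos(2 * t)

def phi_integral(t):  # closed form of int_0^t phi(tau) dtau
    return np.sin(2 * t) / 2

def psi(t):           # spatial factor exp(i k.x) set to 1 at x = 0
    return -c * phi_integral(t)

t, h = 0.7, 1e-6
dpsi_dt = (psi(t + h) - psi(t - h)) / (2 * h)
assert abs(phi(t) + dpsi_dt / c) < 1e-8
```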

With such a transformation, the $\phi_\mathbf{k}$ and $\dot{\mathbf{A}}_\mathbf{k}$ cross term in the Hamiltonian (20) vanishes, as does the $\phi_\mathbf{k}$ term in the four vector square of the last term, leaving just

\begin{aligned}H = \frac{\epsilon_0}{c^2} V \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} \Bigl((c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 - {\left\lvert{ c \mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(26)

Additionally, wedging (5) with $\gamma_0$ now does not lose any information, so our potential Fourier series reduces to just

\begin{aligned}\mathbf{A} &= \sum_{\mathbf{k}} \mathbf{A}_\mathbf{k}(t) e^{i \mathbf{k} \cdot \mathbf{x}} \\ \mathbf{A}_\mathbf{k} &= \frac{1}{{ V }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} \mathbf{A}(\mathbf{x}, t) e^{-i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3.\end{aligned} \quad\quad\quad(27)

The desired harmonic oscillator form would be had in (26) if it were not for the $\mathbf{k} \cdot \mathbf{A}_\mathbf{k}$ term. Does that vanish? Returning to Maxwell’s equation should answer that question, but first it has to be expressed in terms of the vector potential. While $\mathbf{A} = A \wedge \gamma_0$, the lack of an $A^0$ component means that this can be inverted as

\begin{aligned}A = \mathbf{A} \gamma_0 = -\gamma_0 \mathbf{A}.\end{aligned} \quad\quad\quad(29)

The gradient can also be factored into scalar and spatial vector components

\begin{aligned}\nabla = \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) = ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0.\end{aligned} \quad\quad\quad(30)

So, with this $A^0 = 0$ gauge choice the bivector field $F$ is

\begin{aligned}F = \nabla \wedge A = \frac{1}{{2}} \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \end{aligned} \quad\quad\quad(31)

From the left the gradient action on $A$ is

\begin{aligned}\stackrel{ \rightarrow }{\nabla} A &= ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0 (-\gamma_0 \mathbf{A}) \\ &= ( -\partial_0 + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} ) \mathbf{A},\end{aligned}

and from the right

\begin{aligned}A \stackrel{ \leftarrow }{\nabla}&= \mathbf{A} \gamma_0 \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \mathbf{A} ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \partial_0 \mathbf{A} + \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \end{aligned}

Taking the difference we have

\begin{aligned}F &= \frac{1}{{2}} \Bigl( -\partial_0 \mathbf{A} + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \partial_0 \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \Bigr).\end{aligned}

Which is just

\begin{aligned}F = -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}.\end{aligned} \quad\quad\quad(32)

For this vacuum case, premultiplication of Maxwell’s equation by $\gamma_0$ gives

\begin{aligned}0 &= \gamma_0 \nabla ( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= (\partial_0 + \boldsymbol{\nabla})( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \partial_0 \boldsymbol{\nabla} \cdot \mathbf{A} - \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A} + \partial_0 ( \boldsymbol{\nabla} \wedge \mathbf{A} ) + \underbrace{\boldsymbol{\nabla} \cdot ( \boldsymbol{\nabla} \wedge \mathbf{A} ) }_{\boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A})}+ \underbrace{\boldsymbol{\nabla} \wedge ( \boldsymbol{\nabla} \wedge \mathbf{A} )}_{=0} \\ \end{aligned}

The spatial bivector and trivector grades are all zero. Equating the remaining scalar and vector components to zero separately yields a pair of equations in $\mathbf{A}$

\begin{aligned}0 &= \partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) \\ 0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) \end{aligned} \quad\quad\quad(33)

If the divergence of the vector potential is constant we have just a wave equation. Let’s see what that divergence is with the assumed Fourier representation

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{A} &=\sum_{\mathbf{k} \ne (0,0,0)} {\mathbf{A}_\mathbf{k}}^m 2 \pi i \frac{k_m}{\lambda_m} e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i \sum_{\mathbf{k} \ne (0,0,0)} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \\ &=i \sum_\mathbf{k} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}} \end{aligned}
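A single mode of this expansion can be checked numerically: for $\mathbf{A} = \mathbf{A}_\mathbf{k} e^{i \mathbf{k} \cdot \mathbf{x}}$ the divergence should be $i (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{x}}$. A sketch using central differences with arbitrary choices of $\mathbf{k}$ and the complex amplitude:

```python
import numpy as np

# Check of the divergence expansion above for a single Fourier mode:
# div A = i (A_k . k) exp(i k . x), verified with central differences.
k = np.array([2.0, -1.0, 3.0])          # arbitrary angular wave vector
a = np.array([1.0 + 2j, 0.5, -1j])      # arbitrary complex amplitude A_k
x0 = np.array([0.3, 0.1, -0.2])         # arbitrary evaluation point
h = 1e-6

def A(x):
    return a * np.exp(1j * np.dot(k, x))

div = sum((A(x0 + h * e)[i] - A(x0 - h * e)[i]) / (2 * h)
          for i, e in enumerate(np.eye(3)))
expected = 1j * np.dot(a, k) * np.exp(1j * np.dot(k, x0))
assert abs(div - expected) < 1e-6
```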

Since $\mathbf{A}_\mathbf{k} = \mathbf{A}_\mathbf{k}(t)$, there are two ways to have $\partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) = 0$: for each $\mathbf{k}$ we require either $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ or $\mathbf{A}_\mathbf{k} = \text{constant}$. The constant $\mathbf{A}_\mathbf{k}$ solution to the first equation appears to represent a standing spatial wave with no time dependence. Is that of any interest?

The more interesting seeming case is where we have some non-static time varying state. In this case, if $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$, the second of these Maxwell’s equations is just the vector potential wave equation, since the divergence is zero. That is

\begin{aligned}0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} \end{aligned} \quad\quad\quad(35)

Solving this isn’t really what is of interest, since the objective was just to determine if the divergence could be assumed to be zero. This shows then, that if the transverse solution to Maxwell’s equation is picked, the Hamiltonian for this field, with this gauge choice, becomes

\begin{aligned}H = \frac{\epsilon_0}{c^2} V \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} (c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right).\end{aligned} \quad\quad\quad(36)
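As a check that this really behaves like a harmonic oscillator, the energy of a single transverse mode $\mathbf{A}_\mathbf{k}(t) = \mathbf{a} e^{-i \omega t}$ with $\omega = c \left\lvert{\mathbf{k}}\right\rvert$ should be constant in time. A sketch with constants set to one and an arbitrary transverse amplitude:

```python
import numpy as np

# For a single transverse mode A_k(t) = a exp(-i w t) with w = c|k|, the
# Hamiltonian (36) should be constant in time, as expected for a harmonic
# oscillator. Constants are set to 1 for the check; a and k are arbitrary
# choices satisfying the transversality condition a . k = 0.
c, eps0, V = 1.0, 1.0, 1.0
k = np.array([3.0, 0.0, 4.0])
a = np.array([0.0, 2.0, 0.0]) + 1j * np.array([4.0, 0.0, -3.0])  # a . k = 0
w = c * np.linalg.norm(k)

def H(t):
    A = a * np.exp(-1j * w * t)
    Adot = -1j * w * A
    return (eps0 * V / c**2) * (0.5 * np.vdot(Adot, Adot).real
                                + 0.5 * (c * np.linalg.norm(k))**2 * np.vdot(A, A).real)

values = [H(t) for t in np.linspace(0, 10, 7)]
assert np.allclose(values, values[0])
```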

How does the gauge choice alter the Poynting vector? From (21), all the $\phi_\mathbf{k}$ dependence in that integrated momentum density is lost

\begin{aligned}\mathbf{P} &= \epsilon_0 V \sum_{\mathbf{k}}\text{Real} \left(i \mathbf{A}_\mathbf{k} \left( \frac{1}{{c}} {{\dot{\mathbf{A}}_\mathbf{k}}}^{*} \cdot \mathbf{k} \right)\right).\end{aligned} \quad\quad\quad(37)

The $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ solutions to Maxwell’s equation are seen to result in zero momentum for this infinite periodic field. My expectation was something of the form $c \mathbf{P} = H \hat{\mathbf{k}}$, so intuition is either failing me, or my math is failing me, or this contrived periodic field solution leads to trouble.

# Conclusions and followup.

The objective was met: a reproduction of Bohm’s Harmonic oscillator result using a complex exponential Fourier series instead of separate sines and cosines.

The reason for Bohm’s choice to fix zero divergence as the gauge choice upfront is now clear: that automatically cuts complexity from the results. Tackling this problem with complex valued potentials and the Geometric Algebra formulation at the same time probably also made the work a bit more difficult, since it meant blundering through both simultaneously instead of just one at a time.

This was an interesting exercise though, since doing it this way I am able to understand all the intermediate steps. Bohm employed some subtler argumentation to eliminate the scalar potential $\phi$ upfront, and I have to admit I did not follow his logic, whereas blindly following where the math leads makes sense at every step.

As a bit of followup, I’d like to consider the constant $\mathbf{A}_\mathbf{k}$ case in more detail, and any implications of the freedom to pick $\mathbf{A}_0$.

The general calculation of $T^{\mu\nu}$ for the assumed Fourier solution should be possible too, but was not attempted. Doing that general calculation with a four dimensional Fourier series is likely tidier than working with scalar and spatial variables as done here.

Now that the math is out of the way (except possibly for the momentum which doesn’t seem right), some discussion of implications and applications is also in order. My preference is to let the math sink-in a bit first and mull over the momentum issues at leisure.

# References

[2] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

## Electrodynamic field energy for vacuum.

Posted by peeterjoot on December 19, 2009

# Motivation.

We now know how to formulate the energy momentum tensor for complex vector fields (i.e. phasors) in the Geometric Algebra formalism. To recap, for the field $F = \mathbf{E} + I c \mathbf{B}$, where $\mathbf{E}$ and $\mathbf{B}$ may be complex vectors, we have for Maxwell’s equation

\begin{aligned}\nabla F = J/\epsilon_0 c.\end{aligned} \quad\quad\quad(1)

This is a doubly complex representation, with the four vector pseudoscalar $I = \gamma_0 \gamma_1 \gamma_2 \gamma_3$ acting as a non-commutative imaginary, as well as real and imaginary parts for the electric and magnetic field vectors. We take the real part (not the scalar part) of any bivector solution $F$ of Maxwell’s equation as the actual solution, but allow ourselves the freedom to work with the complex phasor representation when convenient. In these phasor vectors, the imaginary $i$, as in $\mathbf{E} = \text{Real}(\mathbf{E}) + i \text{Imag}(\mathbf{E})$, is a commuting imaginary, commuting with all the multivector elements in the algebra.

The real valued, four vector, energy momentum tensor $T(a)$ was found to be

\begin{aligned}T(a) = \frac{\epsilon_0}{4} \Bigl( {{F}}^{*} a \tilde{F} + \tilde{F} a {{F}}^{*} \Bigr) = -\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} a F \Bigr).\end{aligned} \quad\quad\quad(2)

To supply some context that gives meaning to this tensor the associated conservation relationship was found to be

\begin{aligned}\nabla \cdot T(a) &= a \cdot \frac{1}{{ c }} \text{Real} \left( J \cdot {{F}}^{*} \right).\end{aligned} \quad\quad\quad(3)

and in particular for $a = \gamma^0$, this four vector divergence takes the form

\begin{aligned}\frac{\partial {}}{\partial {t}}\frac{\epsilon_0}{2}(\mathbf{E} \cdot {\mathbf{E}}^{*} + c^2 \mathbf{B} \cdot {\mathbf{B}}^{*})+ \boldsymbol{\nabla} \cdot \frac{1}{{\mu_0}} \text{Real} (\mathbf{E} \times {\mathbf{B}}^{*} )+ \text{Real}( \mathbf{J} \cdot {\mathbf{E}}^{*} ) = 0,\end{aligned} \quad\quad\quad(4)

relating the energy term $T^{00} = T(\gamma^0) \cdot \gamma^0$ and the Poynting spatial vector $T(\gamma^0) \wedge \gamma^0$ with the current density and electric field product that constitutes the energy portion of the Lorentz force density.

Let’s apply this to calculating the energy associated with the field that is periodic within a rectangular prism as done by Bohm in [1]. We do not necessarily need the Geometric Algebra formalism for this calculation, but this will be a fun way to attempt it.

# Setup

Let’s assume a Fourier representation for the four vector potential $A$ for the field $F = \nabla \wedge A$. That is

\begin{aligned}A = \sum_{\mathbf{k}} A_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}},\end{aligned} \quad\quad\quad(5)

where summation is over all wave number triplets $\mathbf{k} = (p/\lambda_1,q/\lambda_2,r/\lambda_3)$. The Fourier coefficients $A_\mathbf{k} = {A_\mathbf{k}}^\mu \gamma_\mu$ are allowed to be complex valued, as is the resulting four vector $A$, and the associated bivector field $F$.

Fourier inversion follows from

\begin{aligned}\delta_{\mathbf{k}', \mathbf{k}} =\frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} e^{2 \pi i \mathbf{k}' \cdot \mathbf{x}} e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(6)

but only this orthogonality relationship and not the Fourier coefficients themselves

\begin{aligned}A_\mathbf{k} = \frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} A(\mathbf{x}, t) e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3,\end{aligned} \quad\quad\quad(7)

will be of interest here. Evaluating the curl for this potential yields

\begin{aligned}F = \nabla \wedge A= \sum_{\mathbf{k}} \left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \sum_{m=1}^3 \gamma^m \wedge A_\mathbf{k} \frac{2 \pi i k_m}{\lambda_m} \right) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}}.\end{aligned} \quad\quad\quad(8)

We can now form the energy density

\begin{aligned}U = T(\gamma^0) \cdot \gamma^0=-\frac{\epsilon_0}{2} \text{Real} \Bigl( {{F}}^{*} \gamma^0 F \gamma^0 \Bigr).\end{aligned} \quad\quad\quad(9)

With implied summation over all repeated integer indexes (even without matching uppers and lowers), this is

\begin{aligned}U =-\frac{\epsilon_0}{2} \sum_{\mathbf{k}', \mathbf{k}} \text{Real} \left\langle{{\left( \frac{1}{{c}} \gamma^0 \wedge {{\dot{A}_{\mathbf{k}'}}}^{*} - \gamma^m \wedge {{A_{\mathbf{k}'}}}^{*} \frac{2 \pi i k_m'}{\lambda_m} \right) e^{-2 \pi i \mathbf{k}' \cdot \mathbf{x}}\gamma^0\left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^n \wedge A_\mathbf{k} \frac{2 \pi i k_n}{\lambda_n} \right) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}}\gamma^0}}\right\rangle.\end{aligned} \quad\quad\quad(10)

The grade selection used here doesn’t change the result since we already have a scalar, but will just make it convenient to filter out any higher order products that will cancel anyways. Integrating over the volume element and taking advantage of the orthogonality relationship (6), the exponentials are removed, leaving the energy contained in the volume

\begin{aligned}H = -\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2}\sum_{\mathbf{k}} \text{Real} \left\langle{{\left( \frac{1}{{c}} \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} - \gamma^m \wedge {{A_{\mathbf{k}}}}^{*} \frac{2 \pi i k_m}{\lambda_m} \right) \gamma^0\left( \frac{1}{{c}} \gamma^0 \wedge \dot{A}_\mathbf{k} + \gamma^n \wedge A_\mathbf{k} \frac{2 \pi i k_n}{\lambda_n} \right) \gamma^0}}\right\rangle.\end{aligned} \quad\quad\quad(11)

# First reduction of the Hamiltonian.

Let’s take the products involved in sequence one at a time and evaluate them, later adding and taking real parts as required, for all of

\begin{aligned}\frac{1}{{c^2}}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) \gamma^0 (\gamma^0 \wedge \dot{A}_\mathbf{k}) \gamma^0 }}\right\rangle &=-\frac{1}{{c^2}}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) (\gamma^0 \wedge \dot{A}_\mathbf{k}) }}\right\rangle \end{aligned} \quad\quad\quad(12)

\begin{aligned}- \frac{2 \pi i k_m}{c \lambda_m} \left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) \gamma^0 ( \gamma^0 \wedge \dot{A}_\mathbf{k} ) \gamma^0}}\right\rangle &=\frac{2 \pi i k_m}{c \lambda_m} \left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) ( \gamma^0 \wedge \dot{A}_\mathbf{k} ) }}\right\rangle \end{aligned} \quad\quad\quad(13)

\begin{aligned}\frac{2 \pi i k_n}{c \lambda_n} \left\langle{{ ( \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) \gamma^0 ( \gamma^n \wedge A_\mathbf{k} ) \gamma^0}}\right\rangle &=-\frac{2 \pi i k_n}{c \lambda_n} \left\langle{{ ( \gamma^0 \wedge {{\dot{A}_{\mathbf{k}}}}^{*} ) ( \gamma^n \wedge A_\mathbf{k} ) }}\right\rangle \end{aligned} \quad\quad\quad(14)

\begin{aligned}-\frac{4 \pi^2 k_m k_n}{\lambda_m \lambda_n}\left\langle{{ (\gamma^m \wedge {{A_{\mathbf{k}}}}^{*} ) \gamma^0(\gamma^n \wedge A_\mathbf{k} ) \gamma^0}}\right\rangle. &\end{aligned} \quad\quad\quad(15)

The expectation is to obtain a Hamiltonian for the field that has the structure of harmonic oscillators, where the middle two products would have to be zero or sum to zero or have real parts that sum to zero. The first is expected to contain only products of ${\left\lvert{{\dot{A}_\mathbf{k}}^m}\right\rvert}^2$, and the last only products of ${\left\lvert{{A_\mathbf{k}}^m}\right\rvert}^2$.

While initially guessing that (13) and (14) may cancel, this isn’t so obviously the case. The use of cyclic permutation of multivectors within the scalar grade selection operator $\left\langle{{A B}}\right\rangle = \left\langle{{B A}}\right\rangle$, plus a change of dummy summation indexes in one of the two, shows that this sum is of the form $Z + {{Z}}^{*}$. This sum is intrinsically real, so we can neglect one of the two and double the other, but we will still be required to show that the real part of either is zero.

Let’s reduce these one at a time, starting with (12), temporarily writing $\dot{A}_\mathbf{k} = \kappa$

\begin{aligned}\left\langle{{ (\gamma^0 \wedge {{\kappa}}^{*} ) (\gamma^0 \wedge \kappa) }}\right\rangle &={\kappa^m}^{{*}} \kappa^{m'}\left\langle{{ \gamma^0 \gamma_m \gamma^0 \gamma_{m'} }}\right\rangle \\ &=-{\kappa^m}^{{*}} \kappa^{m'}\left\langle{{ \gamma_m \gamma_{m'} }}\right\rangle \\ &={\kappa^m}^{{*}} \kappa^{m'}\delta_{m m'}.\end{aligned}
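This sort of scalar grade reduction can be spot checked numerically with the Dirac matrix representation of the basis vectors, in which the scalar grade of a multivector is one quarter of the matrix trace. A sketch, assuming the standard Dirac representation and computing the wedge of two vectors as the antisymmetrized matrix product:

```python
import numpy as np

# Verification of the scalar grade reduction above: <(g0 ^ kappa*)(g0 ^ kappa)>
# should equal sum_m |kappa^m|^2 over the spatial components. The gamma
# matrices are the standard Dirac representation; the scalar grade of a
# multivector M in this representation is tr(M)/4.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(A, B, C, D):
    return np.block([[A, B], [C, D]])

g = [block(I2, 0 * I2, 0 * I2, -I2)] + \
    [block(0 * I2, s, -s, 0 * I2) for s in (sx, sy, sz)]  # gamma_0 .. gamma_3

def wedge(A, B):
    return 0.5 * (A @ B - B @ A)

rng = np.random.default_rng(2)
kap = rng.standard_normal(4) + 1j * rng.standard_normal(4)
K = sum(kap[mu] * g[mu] for mu in range(4))          # kappa = kappa^mu gamma_mu
Ks = sum(kap[mu].conj() * g[mu] for mu in range(4))  # conjugated coefficients
lhs = np.trace(wedge(g[0], Ks) @ wedge(g[0], K)) / 4
rhs = sum(abs(kap[m])**2 for m in (1, 2, 3))
assert abs(lhs - rhs) < 1e-10
```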

So the first of our Hamiltonian terms is

\begin{aligned}\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2}\left\langle{{ (\gamma^0 \wedge {{\dot{A}_\mathbf{k}}}^{*} ) (\gamma^0 \wedge \dot{A}_\mathbf{k}) }}\right\rangle &=\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2}{\left\lvert{{{\dot{A}}_{\mathbf{k}}}^m}\right\rvert}^2.\end{aligned} \quad\quad\quad(16)

Note that summation over $m$ is still implied here, so we’d be better off with a spatial vector representation of the Fourier coefficients $\mathbf{A}_\mathbf{k} = A_\mathbf{k} \wedge \gamma_0$. With such a notation, this contribution to the Hamiltonian is

\begin{aligned}\frac{\epsilon_0 \lambda_1 \lambda_2 \lambda_3}{2 c^2} \dot{\mathbf{A}}_\mathbf{k} \cdot {{\dot{\mathbf{A}}_\mathbf{k}}}^{*}.\end{aligned} \quad\quad\quad(17)

To reduce (13) and (14), this time writing $\kappa = A_\mathbf{k}$, we can start with just the scalar selection

\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) ( \gamma^0 \wedge \dot{\kappa} ) }}\right\rangle &=\Bigl( \gamma^m {{(\kappa^0)}}^{*} - {{\kappa}}^{*} \underbrace{(\gamma^m \cdot \gamma^0)}_{=0} \Bigr) \cdot \dot{\kappa} \\ &={{(\kappa^0)}}^{*} \dot{\kappa}^m\end{aligned}

Thus the contribution to the Hamiltonian from (13) and (14) is

\begin{aligned}\frac{2 \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \pi k_m}{c \lambda_m} \text{Real} \Bigl( i {{(A_\mathbf{k}^0)}}^{*} \dot{A_\mathbf{k}}^m \Bigr)=\frac{2 \pi \epsilon_0 \lambda_1 \lambda_2 \lambda_3}{c} \text{Real} \Bigl( i {{(A_\mathbf{k}^0)}}^{*} \mathbf{k} \cdot \dot{\mathbf{A}}_\mathbf{k} \Bigr).\end{aligned} \quad\quad\quad(18)

Most definitely not zero in general. Our final expansion (15) is the messiest. Again with $A_\mathbf{k} = \kappa$ for short, the grade selection of this term in coordinates is

\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) \gamma^0 (\gamma^n \wedge \kappa ) \gamma^0 }}\right\rangle&=- {{\kappa_\mu}}^{*} \kappa^\nu \left\langle{{ (\gamma^m \wedge \gamma^\mu) \gamma^0 (\gamma^n \wedge \gamma_\nu) \gamma^0 }}\right\rangle.\end{aligned} \quad\quad\quad(19)

Expanding this out yields

\begin{aligned}\left\langle{{ (\gamma^m \wedge {{\kappa}}^{*} ) \gamma^0 (\gamma^n \wedge \kappa ) \gamma^0 }}\right\rangle&=- ( {\left\lvert{\kappa^0}\right\rvert}^2 - {\left\lvert{\kappa^a}\right\rvert}^2 ) \delta_{m n} + {{\kappa^n}}^{*} \kappa^m.\end{aligned} \quad\quad\quad(20)

The contribution to the Hamiltonian from this, with $\phi_\mathbf{k} = A^0_\mathbf{k}$, is then

\begin{aligned}2 \pi^2 \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \Bigl(-\mathbf{k}^2 {{\phi_\mathbf{k}}}^{*} \phi_\mathbf{k} + \mathbf{k}^2 ({{\mathbf{A}_\mathbf{k}}}^{*} \cdot \mathbf{A}_\mathbf{k})+ (\mathbf{k} \cdot {{\mathbf{A}_\mathbf{k}}}^{*}) (\mathbf{k} \cdot \mathbf{A}_\mathbf{k})\Bigr).\end{aligned} \quad\quad\quad(21)

A final reassembly of the Hamiltonian from the parts (17), (18), and (21) is then

\begin{aligned}H = \epsilon_0 \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2 c^2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{2 \pi}{c} \text{Real} \Bigl( i {{ \phi_\mathbf{k} }}^{*} (\mathbf{k} \cdot \dot{\mathbf{A}}_\mathbf{k}) \Bigr)+2 \pi^2 \Bigl(\mathbf{k}^2 ( -{\left\lvert{\phi_\mathbf{k}}\right\rvert}^2 + {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 ) + {\left\lvert{\mathbf{k} \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(22)

This is finally reduced to a completely real expression, and one without any explicit Geometric Algebra. All the four vector Fourier coefficients are written out explicitly in terms of the spacetime split $A_\mathbf{k} = (\phi_\mathbf{k}, \mathbf{A}_\mathbf{k})$, which is natural since an explicit time and space split was the starting point.

# Gauge transformation to simplify the Hamiltonian.

While (22) has a considerably simpler form than (11), what was expected was something that looked like the harmonic oscillator. On the surface this does not appear to be such a beast. Exploiting the gauge freedom is required to make the simplification that puts things into harmonic oscillator form.

If we change our four vector potential, $A \rightarrow A + \nabla \psi$, then Maxwell’s equation takes the form

\begin{aligned}J/\epsilon_0 c = \nabla (\nabla \wedge (A + \nabla \psi)) = \nabla (\nabla \wedge A) + \nabla (\underbrace{\nabla \wedge \nabla \psi}_{=0}),\end{aligned} \quad\quad\quad(23)

which is unchanged by the addition of the gradient to any original potential solution to the equation. In coordinates this is a transformation of the form

\begin{aligned}A^\mu \rightarrow A^\mu + \partial^\mu \psi,\end{aligned} \quad\quad\quad(24)

and we can use this to force any one of the potential coordinates to zero. For this problem, it appears that it is desirable to seek a $\psi$ such that $A^0 + \partial_0 \psi = 0$. That is

\begin{aligned}\sum_\mathbf{k} \phi_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} + \frac{1}{{c}} \partial_t \psi = 0.\end{aligned} \quad\quad\quad(25)

Or,

\begin{aligned}\psi(\mathbf{x},t) = \psi(\mathbf{x},0) - c \sum_\mathbf{k} e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} \int_{\tau=0}^t \phi_\mathbf{k}(\tau) d\tau.\end{aligned} \quad\quad\quad(26)
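As a quick per-mode sanity check of the gauge condition (25): for a sample coefficient $\phi_\mathbf{k}(t)$ (arbitrary, chosen here just for illustration), the mode coefficient $\psi_\mathbf{k}(t) = -c \int_0^t \phi_\mathbf{k}(\tau) d\tau$ does satisfy $\phi_\mathbf{k} + \partial_t \psi_\mathbf{k}/c = 0$.

```python
import sympy as sp

# Per-mode check of the gauge condition (25). phi_k(t) below is an
# arbitrary sample coefficient, assumed only for illustration.
t, tau, c = sp.symbols('t tau c', positive=True)
phi_k = sp.cos(3 * t) + 2 * sp.sin(t)

# mode coefficient of the gauge function: psi_k(t) = -c * int_0^t phi_k(tau) dtau
psi_k = -c * sp.integrate(phi_k.subs(t, tau), (tau, 0, t))

# the gauge condition per mode: phi_k + (1/c) d/dt psi_k = 0
residual = sp.simplify(phi_k + sp.diff(psi_k, t) / c)
assert residual == 0
```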

With such a transformation, the $\phi_\mathbf{k}$ and $\dot{\mathbf{A}}_\mathbf{k}$ cross term in the Hamiltonian (22) vanishes, as does the $\phi_\mathbf{k}$ term in the four vector square of the last term, leaving just

\begin{aligned}H = \frac{\epsilon_0}{c^2} \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} \Bigl((2 \pi c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 + {\left\lvert{ ( 2 \pi c \mathbf{k}) \cdot \mathbf{A}_\mathbf{k}}\right\rvert}^2\Bigr)\right).\end{aligned} \quad\quad\quad(27)

Additionally, wedging (5) with $\gamma_0$ now does not lose any information, so our potential Fourier series is reduced to just

\begin{aligned}\mathbf{A} &= \sum_{\mathbf{k}} \mathbf{A}_\mathbf{k}(t) e^{2 \pi i \mathbf{k} \cdot \mathbf{x}} \\ \mathbf{A}_\mathbf{k} &= \frac{1}{{ \lambda_1 \lambda_2 \lambda_3 }}\int_0^{\lambda_1}\int_0^{\lambda_2}\int_0^{\lambda_3} \mathbf{A}(\mathbf{x}, t) e^{-2 \pi i \mathbf{k} \cdot \mathbf{x}} dx^1 dx^2 dx^3.\end{aligned} \quad\quad\quad(28)
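The coefficient integral in (28) is easy to check numerically, in one dimension for simplicity (the period, grid, and test function below are assumed for illustration): the mean of $\mathbf{A}(x) e^{-2 \pi i k x/\lambda}$ over a uniform grid computes $(1/\lambda)\int_0^\lambda$, and the resulting series reconstructs the original function.

```python
import numpy as np

# 1D sanity check of the Fourier coefficient integral in (28).
# Period, grid size, and the test function are placeholders.
lam = 2.5
N = 64
x = np.arange(N) * lam / N
A = 1.3 * np.cos(2 * np.pi * x / lam) - 0.7 * np.sin(6 * np.pi * x / lam)

# coefficient A_k = (1/lam) * integral_0^lam A(x) exp(-2 pi i k x / lam) dx,
# which the grid mean computes exactly for a band-limited function
ks = range(-5, 6)
coeffs = {k: (A * np.exp(-2j * np.pi * k * x / lam)).mean() for k in ks}

# reconstruct the series and compare with the original samples
A_rec = sum(c * np.exp(2j * np.pi * k * x / lam) for k, c in coeffs.items())
assert np.allclose(A_rec.imag, 0, atol=1e-12)
assert np.allclose(A_rec.real, A)
```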

The desired harmonic oscillator form would be had in (27) if it were not for the $\mathbf{k} \cdot \mathbf{A}_\mathbf{k}$ term. Does that vanish? Returning to Maxwell’s equation should answer that question, but first it has to be expressed in terms of the vector potential. While $\mathbf{A} = A \wedge \gamma_0$, the lack of an $A^0$ component means that this can be inverted as

\begin{aligned}A = \mathbf{A} \gamma_0 = -\gamma_0 \mathbf{A}.\end{aligned} \quad\quad\quad(30)
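The anticommutation in (30) can also be spot-checked in a Dirac matrix representation (the representation and sample components are assumptions made only for this check): a purely spatial $\mathbf{A} = A^i \gamma_i \gamma_0$ anticommutes with $\gamma_0$.

```python
import numpy as np

# Check that a spatial vector A = A^i gamma_i gamma_0 anticommutes with
# gamma_0, i.e. A gamma_0 = -gamma_0 A as in (30). Dirac representation
# and the sample components are placeholders for illustration.
I2 = np.eye(2)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
g_up = [np.block([[0 * I2, si], [-si, 0 * I2]]) for si in s]  # gamma^i

# gamma_i = -gamma^i for spatial indices; A = A^i gamma_i gamma_0
A3 = sum(a * (-gi) @ g0 for a, gi in zip((1.0, -2.0, 0.5), g_up))
assert np.allclose(A3 @ g0, -g0 @ A3)
```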

The gradient can also be factored into scalar and spatial vector components

\begin{aligned}\nabla = \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) = ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0.\end{aligned} \quad\quad\quad(31)

So, with this $A^0 = 0$ gauge choice the bivector field $F$ is

\begin{aligned}F = \nabla \wedge A = \frac{1}{{2}} \left( \stackrel{ \rightarrow }{\nabla} A - A \stackrel{ \leftarrow }{\nabla} \right) \end{aligned} \quad\quad\quad(32)

From the left the gradient action on $A$ is

\begin{aligned}\stackrel{ \rightarrow }{\nabla} A &= ( \partial_0 - \boldsymbol{\nabla} ) \gamma^0 (-\gamma_0 \mathbf{A}) \\ &= ( -\partial_0 + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} ) \mathbf{A},\end{aligned}

and from the right

\begin{aligned}A \stackrel{ \leftarrow }{\nabla}&= \mathbf{A} \gamma_0 \gamma^0 ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \mathbf{A} ( \partial_0 + \boldsymbol{\nabla} ) \\ &= \partial_0 \mathbf{A} + \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \end{aligned}

Taking the difference we have

\begin{aligned}F &= \frac{1}{{2}} \Bigl( -\partial_0 \mathbf{A} + \stackrel{ \rightarrow }{\boldsymbol{\nabla}} \mathbf{A} - \partial_0 \mathbf{A} - \mathbf{A} \stackrel{ \leftarrow }{\boldsymbol{\nabla}} \Bigr).\end{aligned}

Which is just

\begin{aligned}F = -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A}.\end{aligned} \quad\quad\quad(33)

For this vacuum case, premultiplication of Maxwell’s equation by $\gamma_0$ gives

\begin{aligned}0 &= \gamma_0 \nabla ( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= (\partial_0 + \boldsymbol{\nabla})( -\partial_0 \mathbf{A} + \boldsymbol{\nabla} \wedge \mathbf{A} ) \\ &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} - \partial_0 \boldsymbol{\nabla} \cdot \mathbf{A} - \partial_0 \boldsymbol{\nabla} \wedge \mathbf{A} + \partial_0 ( \boldsymbol{\nabla} \wedge \mathbf{A} ) + \underbrace{\boldsymbol{\nabla} \cdot ( \boldsymbol{\nabla} \wedge \mathbf{A} ) }_{\boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A})}+ \underbrace{\boldsymbol{\nabla} \wedge ( \boldsymbol{\nabla} \wedge \mathbf{A} )}_{=0} \\ \end{aligned}

The spatial bivector and trivector grades are all zero. Equating the remaining scalar and vector components to zero separately yields a pair of equations in $\mathbf{A}$

\begin{aligned}0 &= \partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) \\ 0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A}) \end{aligned} \quad\quad\quad(34)
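The contraction $\boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \wedge \mathbf{A}) = \boldsymbol{\nabla}^2 \mathbf{A} - \boldsymbol{\nabla} (\boldsymbol{\nabla} \cdot \mathbf{A})$ used in this reduction is the GA form of the familiar identity $\nabla \times (\nabla \times \mathbf{A}) = \nabla (\nabla \cdot \mathbf{A}) - \nabla^2 \mathbf{A}$, which can be verified symbolically (the sample field is arbitrary, chosen just for illustration).

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# arbitrary smooth sample vector field, assumed for illustration
A = sp.Matrix([sp.sin(y) * z, x * sp.exp(z), sp.cos(x * y)])

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def div(F):
    return sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y)])

def lap(F):
    return sp.Matrix([div(grad(F[i])) for i in range(3)])

# grad(div A) - laplacian A == curl(curl A), the identity behind the
# div(wedge) reduction above
lhs = grad(div(A)) - lap(A)
rhs = curl(curl(A))
assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
```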

If the divergence of the vector potential is spatially constant we have just a wave equation. Let’s see what that divergence is with the assumed Fourier representation

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{A} &=\sum_{\mathbf{k} \ne (0,0,0)} {\mathbf{A}_\mathbf{k}}^m 2 \pi i \frac{k_m}{\lambda_m} e^{2\pi i \mathbf{k} \cdot \mathbf{x}} \\ &=2 \pi i \sum_{\mathbf{k} \ne (0,0,0)} (\mathbf{A}_\mathbf{k} \cdot \mathbf{k}) e^{2\pi i \mathbf{k} \cdot \mathbf{x}}.\end{aligned}

Since $\mathbf{A}_\mathbf{k} = \mathbf{A}_\mathbf{k}(t)$, there are two ways to have $\partial_t (\boldsymbol{\nabla} \cdot \mathbf{A}) = 0$. For each $\mathbf{k} \ne 0$ we require either $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ or $\mathbf{A}_\mathbf{k} = \text{constant}$. The constant $\mathbf{A}_\mathbf{k}$ solution appears to represent a standing spatial wave with no time dependence. Is that of any interest?
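That the transverse condition $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ makes the divergence vanish can be checked with a small FFT experiment (the grid size, period, and mode content below are assumed purely for illustration).

```python
import numpy as np

# Build a vector potential from a couple of transverse Fourier modes
# (A_k . k = 0) and verify div A = 0 via spectral differentiation.
# Grid, period, and amplitudes are placeholders.
N, lam = 16, 1.0
x = np.arange(N) / N * lam
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

def mode(k, amp):
    phase = np.exp(2j * np.pi * (k[0] * X + k[1] * Y + k[2] * Z) / lam)
    return [a * phase for a in amp]

# two transverse modes: (1,0,0).(0,1,0) = 0 and (0,2,1).(1,1,-2) = 0
A = mode((1, 0, 0), (0.0, 1.0, 0.0))
B = mode((0, 2, 1), (1.0, 1.0, -2.0))
A = [a + b for a, b in zip(A, B)]

# spectral divergence: multiply each component's FFT by 2*pi*i*f_j
freqs = np.fft.fftfreq(N, d=lam / N)   # cycles per unit length
KX, KY, KZ = np.meshgrid(freqs, freqs, freqs, indexing='ij')
div = np.fft.ifftn(2j * np.pi * (KX * np.fft.fftn(A[0])
                                 + KY * np.fft.fftn(A[1])
                                 + KZ * np.fft.fftn(A[2])))
assert np.max(np.abs(div)) < 1e-10
```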

The more interesting seeming case is where we have some non-static time varying state. In this case, if $\mathbf{A}_\mathbf{k} \cdot \mathbf{k} = 0$ for all $\mathbf{k} \ne 0$, the second of these Maxwell’s equations is just the vector potential wave equation, since the divergence is zero. That is

\begin{aligned}0 &= -\frac{1}{{c^2}} \partial_{tt} \mathbf{A} + \boldsymbol{\nabla}^2 \mathbf{A} \end{aligned} \quad\quad\quad(36)

Solving this isn’t really what is of interest, since the objective was just to determine if the divergence could be assumed to be zero. This shows then, that if the transverse solution to Maxwell’s equation is picked, the Hamiltonian for this field, with this gauge choice, becomes

\begin{aligned}H = \frac{\epsilon_0}{c^2} \lambda_1 \lambda_2 \lambda_3 \sum_\mathbf{k}\left(\frac{1}{{2}} {\left\lvert{\dot{\mathbf{A}}_\mathbf{k}}\right\rvert}^2+\frac{1}{{2}} (2 \pi c \mathbf{k})^2 {\left\lvert{\mathbf{A}_\mathbf{k}}\right\rvert}^2 \right).\end{aligned} \quad\quad\quad(37)
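Each transverse mode in (37) is a harmonic oscillator of angular frequency $\omega_\mathbf{k} = 2 \pi c \left\lvert \mathbf{k} \right\rvert$. A tiny numerical sketch (the constants and amplitude below are placeholders, not values from the text) checks that the per-mode energy $\frac{1}{2} \left\lvert \dot{\mathbf{A}}_\mathbf{k} \right\rvert^2 + \frac{1}{2} \omega_\mathbf{k}^2 \left\lvert \mathbf{A}_\mathbf{k} \right\rvert^2$ is conserved along the oscillator solution.

```python
import numpy as np

# Per-mode energy conservation for (37). c, k, and the transverse
# amplitude are placeholder values, assumed for illustration.
c = 1.0
k = np.array([2.0, 1.0, 0.0])
omega = 2 * np.pi * c * np.linalg.norm(k)
A0 = np.array([0.0, 0.0, 1.0])               # transverse: A0 . k = 0

t = np.linspace(0.0, 3.0, 301)
A = np.outer(np.cos(omega * t), A0)          # oscillator solution A_k(t)
Adot = np.outer(-omega * np.sin(omega * t), A0)

# energy per mode, dropping the common epsilon_0 lambda_1 lambda_2 lambda_3 / c^2 factor
H = 0.5 * (Adot**2).sum(axis=1) + 0.5 * omega**2 * (A**2).sum(axis=1)
assert np.allclose(H, H[0])
```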

# Conclusions and followup.

The objective was met: a reproduction of Bohm’s harmonic oscillator result using a complex exponential Fourier series instead of separate sines and cosines.

The reason for Bohm’s choice to fix zero divergence as the gauge choice upfront is now clear: that automatically cuts complexity from the results. Working this problem with complex valued potentials while also using the Geometric Algebra formulation probably made the work a bit more difficult, since blundering through both simultaneously was required instead of just one at a time.

This was an interesting exercise though, since doing it this way I am able to understand all the intermediate steps. Bohm employed some subtler argumentation to eliminate the scalar potential $\phi$ upfront, and I have to admit I did not follow his logic, whereas blindly following where the math leads all makes sense.

As a bit of followup, I’d like to consider the constant $\mathbf{A}_\mathbf{k}$ case, and any implications of the freedom to pick $\mathbf{A}_0$. I’d also like to construct the Poynting vector $T(\gamma^0) \wedge \gamma_0$, and see what the structure of that is with this Fourier representation.

A general calculation of $T^{\mu\nu}$ for an assumed Fourier solution should be possible too, but working in spatial quantities for the general case is probably torture. A four dimensional Fourier series is likely a superior option for the general case.

# References

[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.