# Peeter Joot's Blog.


# Posts Tagged ‘plane wave’

## Plane wave solutions of Maxwell’s equation using Geometric Algebra

Posted by peeterjoot on September 3, 2012

# Motivation

Study of reflection and transmission of radiation in isotropic, charge and current free, linear matter utilizes the plane wave solutions to Maxwell’s equations. These have the structure of phasor equations, with some specific constraints on the components and the exponents.

These constraints are usually derived starting with the plain old vector form of Maxwell’s equations, and it is natural to wonder how this is done directly using Geometric Algebra. [1] provides one such derivation, using the covariant form of Maxwell’s equations. Here’s a slightly more pedestrian way of doing the same.

# Maxwell’s equations in media

We start with Maxwell’s equations for linear matter as found in [2]

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} = 0\end{aligned} \hspace{\stretch{1}}(1.2.1a)

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{E} = -\frac{\partial {\mathbf{B}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(1.2.1b)

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} = 0\end{aligned} \hspace{\stretch{1}}(1.2.1c)

\begin{aligned}\boldsymbol{\nabla} \times \mathbf{B} = \mu\epsilon \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.1d)

We merge these using the geometric identity

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{a} + I \boldsymbol{\nabla} \times \mathbf{a} = \boldsymbol{\nabla} \mathbf{a},\end{aligned} \hspace{\stretch{1}}(1.2.2)

where $I$ is the 3D pseudoscalar $I = \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3$, to find

\begin{aligned}\boldsymbol{\nabla} \mathbf{E} = -I \frac{\partial {\mathbf{B}}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(1.2.3a)

\begin{aligned}\boldsymbol{\nabla} \mathbf{B} = I \mu\epsilon \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.3b)

We want dimensions of $1/L$ for the derivative operator on the RHS of 1.2.3b, so we divide through by $\sqrt{\mu\epsilon} I$ for

\begin{aligned}-I \frac{1}{{\sqrt{\mu\epsilon}}} \boldsymbol{\nabla} \mathbf{B} = \sqrt{\mu\epsilon} \frac{\partial {\mathbf{E}}}{\partial {t}}.\end{aligned} \hspace{\stretch{1}}(1.2.4)

This can now be added to 1.2.3a for

\begin{aligned}\left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \left( \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \right)= 0.\end{aligned} \hspace{\stretch{1}}(1.2.5)

This is Maxwell’s equation in linear isotropic charge and current free matter in Geometric Algebra form.
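The merge above rests on the vector geometric product identity $\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}$, with $\mathbf{a} \wedge \mathbf{b} = I (\mathbf{a} \times \mathbf{b})$, applied to the gradient. As a quick numerical sanity check (not from the post), here is a sketch using the Pauli-matrix model of the 3D geometric algebra, an illustrative assumption in which $\mathbf{e}_k \rightarrow \sigma_k$ and the geometric product is the matrix product:

```python
import numpy as np

# Pauli-matrix model of the 3D GA (illustrative assumption: e_k -> sigma_k,
# so the geometric product is ordinary matrix multiplication).
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = e1 @ e2 @ e3                      # pseudoscalar; 1j * identity in this model

def vec(a):
    """Embed a 3-component vector as a multivector (2x2 matrix)."""
    return a[0] * e1 + a[1] * e2 + a[2] * e3

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

# geometric product: a b = a . b + I (a x b)
lhs = vec(a) @ vec(b)
rhs = np.dot(a, b) * np.eye(2) + I @ vec(np.cross(a, b))
assert np.allclose(lhs, rhs)

# I squares to -1 and commutes with vectors
assert np.allclose(I @ I, -np.eye(2))
assert np.allclose(I @ vec(a), vec(a) @ I)
```

Applying this identity with $\mathbf{a} = \boldsymbol{\nabla}$ to each of $\mathbf{E}$ and $\mathbf{B}$ is exactly how 1.2.1 collapses into 1.2.3.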

# Phasor solutions

We write the electromagnetic field as

\begin{aligned}F = \left( \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \right),\end{aligned} \hspace{\stretch{1}}(1.3.6)

so that for vacuum where $1/\sqrt{\mu \epsilon} = c$ we have the usual $F = \mathbf{E} + I c \mathbf{B}$. Assuming a phasor solution of

\begin{aligned}\tilde{F} = F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}\end{aligned} \hspace{\stretch{1}}(1.3.7)

where $F_0$ is allowed to be complex, and the actual field is obtained by taking the real part

\begin{aligned}F = \text{Real} \tilde{F} = \text{Real}(F_0) \cos(\mathbf{k} \cdot \mathbf{x} - \omega t)-\text{Imag}(F_0) \sin(\mathbf{k} \cdot \mathbf{x} - \omega t).\end{aligned} \hspace{\stretch{1}}(1.3.8)

Note carefully that we are using a scalar imaginary $i$, as well as the multivector (pseudoscalar) $I$, despite the fact that both have the square to scalar minus one property.

We now seek the constraints on $\mathbf{k}$, $\omega$, and $F_0$ that allow this to be a solution to 1.2.5

\begin{aligned}0 = \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F}.\end{aligned} \hspace{\stretch{1}}(1.3.9)

As usual in the non-geometric algebra treatment, we observe that any such solution $F$ to Maxwell’s equation is also a wave equation solution. In GA we can show this by multiplying from the left with the conjugate operator,

\begin{aligned}\begin{aligned}0 &= \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F} \\ &= \left(\boldsymbol{\nabla} - \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) \tilde{F} \\ &=\left( \boldsymbol{\nabla}^2 - \mu\epsilon \frac{\partial^2}{\partial t^2} \right) \tilde{F} \\ &=\left( \boldsymbol{\nabla}^2 - \frac{1}{{v^2}} \frac{\partial^2}{\partial t^2} \right) \tilde{F},\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.10)

where $v = 1/\sqrt{\mu\epsilon}$ is the speed of the wave described by this solution.

Inserting the exponential form of our assumed solution 1.3.7 we find

\begin{aligned}0 = -(\mathbf{k}^2 - \omega^2/v^2) F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)},\end{aligned} \hspace{\stretch{1}}(1.3.11)

which implies that the wave number vector $\mathbf{k}$ and the angular frequency $\omega$ are related by

\begin{aligned}v^2 \mathbf{k}^2 = \omega^2.\end{aligned} \hspace{\stretch{1}}(1.3.12)
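As a small numeric illustration of this dispersion relation (the material values are hypothetical, chosen for illustration only):

```python
import numpy as np

c = 2.99792458e8                  # speed of light, m/s
mu_r, eps_r = 1.0, 2.25           # hypothetical nonmagnetic glass-like medium
v = c / np.sqrt(mu_r * eps_r)     # wave speed v = 1/sqrt(mu eps)

k = 2 * np.pi / 500e-9            # a wave number, 1/m
omega = v * k                     # angular frequency fixed by v^2 k^2 = omega^2

assert np.isclose(v**2 * k**2, omega**2)
```

The wave speed comes out to $c/1.5$, as expected for an index of refraction $n = \sqrt{\mu_r \epsilon_r} = 1.5$.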

Our assumed solution must also satisfy the first order system 1.3.9

\begin{aligned}\begin{aligned}0 &= \left(\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \frac{\partial {}}{\partial {t}} \right) F_0e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)} \\ &=i\left(\mathbf{e}_m k_m - \frac{\omega}{v}\right) F_0e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)} \\ &=i k ( \hat{\mathbf{k}} - 1 ) F_0 e^{i (\mathbf{k} \cdot \mathbf{x} - \omega t)}.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.13)

The constraints on $F_0$ must then be given by

\begin{aligned}0 = ( \hat{\mathbf{k}} - 1 ) F_0.\end{aligned} \hspace{\stretch{1}}(1.3.14)

With

\begin{aligned}F_0 = \mathbf{E}_0 + I v \mathbf{B}_0,\end{aligned} \hspace{\stretch{1}}(1.3.15)

we must then have all grades of the multivector equation equal to zero

\begin{aligned}0 = ( \hat{\mathbf{k}} - 1 ) \left(\mathbf{E}_0 + I v \mathbf{B}_0\right).\end{aligned} \hspace{\stretch{1}}(1.3.16)

Writing out all the geometric products, noting that $I$ commutes with all of $\hat{\mathbf{k}}$, $\mathbf{E}_0$, and $\mathbf{B}_0$ and employing the identity $\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}$ we have

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{E}_0 - \mathbf{E}_0 + \hat{\mathbf{k}} \wedge \mathbf{E}_0 + I v \hat{\mathbf{k}} \cdot \mathbf{B}_0 + I v \hat{\mathbf{k}} \wedge \mathbf{B}_0 - I v \mathbf{B}_0.\end{aligned} \hspace{\stretch{1}}(1.3.17)

This is

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{E}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18a)

\begin{aligned}\mathbf{E}_0 =- \hat{\mathbf{k}} \times v \mathbf{B}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18b)

\begin{aligned}v \mathbf{B}_0 = \hat{\mathbf{k}} \times \mathbf{E}_0 \end{aligned} \hspace{\stretch{1}}(1.3.18c)

\begin{aligned}0 = \hat{\mathbf{k}} \cdot \mathbf{B}_0.\end{aligned} \hspace{\stretch{1}}(1.3.18d)

This and 1.3.12 describe all the constraints on our phasor that are required for it to be a solution. Note that only one of the two cross product equations is required, because the two are not independent. This can be shown by crossing $\hat{\mathbf{k}}$ with 1.3.18b and using the identity

\begin{aligned}\mathbf{a} \times (\mathbf{a} \times \mathbf{b}) = - \mathbf{a}^2 \mathbf{b} + \mathbf{a} (\mathbf{a} \cdot \mathbf{b}).\end{aligned} \hspace{\stretch{1}}(1.3.19)

One can find easily that 1.3.18b and 1.3.18c provide the same relationship between the $\mathbf{E}_0$ and $\mathbf{B}_0$ components of $F_0$. Writing out the complete expression for $F_0$ we have

\begin{aligned}\begin{aligned}F_0 &= \mathbf{E}_0 + I v \mathbf{B}_0 \\ &=\mathbf{E}_0 + I \hat{\mathbf{k}} \times \mathbf{E}_0 \\ &=\mathbf{E}_0 + \hat{\mathbf{k}} \wedge \mathbf{E}_0.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.20)

Since $\hat{\mathbf{k}} \cdot \mathbf{E}_0 = 0$, this is

\begin{aligned}F_0 = (1 + \hat{\mathbf{k}}) \mathbf{E}_0.\end{aligned} \hspace{\stretch{1}}(1.3.21)

Had we been clever enough this could have been deduced directly from 1.3.14, since we require a product that is killed by left multiplication with $\hat{\mathbf{k}} - 1$. Our complete plane wave solution to Maxwell’s equation is therefore given by

\begin{aligned}\begin{aligned}F &= \text{Real}(\tilde{F}) = \mathbf{E} + \frac{I}{\sqrt{\mu\epsilon}} \mathbf{B} \\ \tilde{F} &= (1 \pm \hat{\mathbf{k}}) \mathbf{E}_0 e^{i (\mathbf{k} \cdot \mathbf{x} \mp \omega t)} \\ 0 &= \hat{\mathbf{k}} \cdot \mathbf{E}_0 \\ \mathbf{k}^2 &= \omega^2 \mu \epsilon.\end{aligned}\end{aligned} \hspace{\stretch{1}}(1.3.22)
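These constraints can be spot-checked numerically. The sketch below (an illustration, not from the post) verifies the redundancy of the two cross product equations with plain numpy, and checks the annihilation property $(\hat{\mathbf{k}} - 1)(1 + \hat{\mathbf{k}})\mathbf{E}_0 = 0$ in the Pauli-matrix model of the GA, which follows from $\hat{\mathbf{k}}^2 = 1$:

```python
import numpy as np

# Pauli-matrix model of the 3D GA (illustrative assumption: e_k -> sigma_k).
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)

def vec(a):
    return a[0] * e1 + a[1] * e2 + a[2] * e3

rng = np.random.default_rng(0)
k = rng.standard_normal(3)
khat = k / np.linalg.norm(k)

E0 = rng.standard_normal(3)
E0 -= khat * np.dot(khat, E0)       # impose 1.3.18a: khat . E0 = 0
vB0 = np.cross(khat, E0)            # 1.3.18c fixes v B0

# 1.3.18b then holds automatically: the two cross product equations are dependent
assert np.allclose(E0, -np.cross(khat, vB0))
assert np.isclose(np.dot(khat, vB0), 0)   # 1.3.18d

# F0 = (1 + khat) E0 is killed by left multiplication with khat - 1,
# since khat squares to one
F0 = (np.eye(2) + vec(khat)) @ vec(E0)
assert np.allclose((vec(khat) - np.eye(2)) @ F0, 0)
```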

# References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] D.J. Griffiths. Introduction to Electrodynamics. Prentice-Hall, 1981.

## PHY456H1F: Quantum Mechanics II. Lecture L24 (Taught by Prof J.E. Sipe). 3D Scattering cross sections (cont.)

Posted by peeterjoot on December 5, 2011


# Disclaimer.

Peeter’s lecture notes from class. May not be entirely coherent.

# Scattering cross sections.

Recall that we are studying the case of a potential that is zero outside of a fixed bound, $V(\mathbf{r}) = 0$ for $r > r_0$, as in figure (\ref{fig:qmTwoL24:qmTwoL22fig5})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.3\textheight]{qmTwoL22fig5}
\caption{Bounded potential.}
\end{figure}

and were looking for solutions to Schrödinger’s equation

\begin{aligned}-\frac{\hbar^2}{2\mu} \boldsymbol{\nabla}^2\psi_\mathbf{k}(\mathbf{r})+ V(\mathbf{r})\psi_\mathbf{k}(\mathbf{r})=\frac{\hbar^2 \mathbf{k}^2}{2 \mu}\psi_\mathbf{k}(\mathbf{r}),\end{aligned} \hspace{\stretch{1}}(2.1)

in regions of space where $r > r_0$ is very large. We found

\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) \sim e^{i \mathbf{k} \cdot \mathbf{r}} + \frac{e^{i k r}}{r} f_\mathbf{k}(\theta, \phi).\end{aligned} \hspace{\stretch{1}}(2.2)

For $r \le r_0$ this will be something much more complicated.

To study scattering we’ll use the concept of probability flux as in electromagnetism

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{j} + \dot{\rho} = 0\end{aligned} \hspace{\stretch{1}}(2.3)

Using

\begin{aligned}\rho(\mathbf{r}, t) =\psi_\mathbf{k}(\mathbf{r})^{*}\psi_\mathbf{k}(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.4)

we find

\begin{aligned}\mathbf{j}(\mathbf{r}, t) = \frac{\hbar}{2 \mu i} \Bigl(\psi_\mathbf{k}(\mathbf{r})^{*} \boldsymbol{\nabla} \psi_\mathbf{k}(\mathbf{r})- (\boldsymbol{\nabla} \psi_\mathbf{k}^{*}(\mathbf{r})) \psi_\mathbf{k}(\mathbf{r})\Bigr)\end{aligned} \hspace{\stretch{1}}(2.5)

which holds when

\begin{aligned}-\frac{\hbar^2}{2\mu} \boldsymbol{\nabla}^2\psi_\mathbf{k}(\mathbf{r})+ V(\mathbf{r})\psi_\mathbf{k}(\mathbf{r})=i \hbar \frac{\partial {\psi_\mathbf{k}(\mathbf{r})}}{\partial {t}}\end{aligned} \hspace{\stretch{1}}(2.6)

In a fashion similar to what we did in the 1D case, let’s suppose that we can write our wave function

\begin{aligned}\psi(\mathbf{r}, t_{\text{initial}}) = \int d^3k \alpha(\mathbf{k}, t_{\text{initial}}) \psi_\mathbf{k}(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.7)

and treat the scattering as the scattering of a plane wave front (idealizing a set of wave packets) off of the object of interest as depicted in figure (\ref{fig:qmTwoL24:qmTwoL24fig3})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.3\textheight]{qmTwoL24fig3}
\caption{plane wave front incident on particle}
\end{figure}

We assume that our incoming particles are sufficiently localized in $k$ space as depicted in the idealized representation of figure (\ref{fig:qmTwoL24:qmTwoL24fig4})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.3\textheight]{qmTwoL24fig4}
\caption{k space localized wave packet}
\end{figure}

we assume that $\alpha(\mathbf{k}, t_{\text{initial}})$ is localized.

\begin{aligned}\psi(\mathbf{r}, t_{\text{initial}}) =\int d^3k\left(\alpha(\mathbf{k}, t_{\text{initial}})e^{i k_z z}+\alpha(\mathbf{k}, t_{\text{initial}}) \frac{e^{i k r}}{r} f_\mathbf{k}(\theta, \phi)\right)\end{aligned} \hspace{\stretch{1}}(2.8)

We suppose that

\begin{aligned}\alpha(\mathbf{k}, t_{\text{initial}}) = \alpha(\mathbf{k}) e^{-i \hbar k^2 t_{\text{initial}}/ 2\mu}\end{aligned} \hspace{\stretch{1}}(2.9)

where $\alpha(\mathbf{k}, t_{\text{initial}})$ is built in this fashion so that the initial wave function is non-zero only for $z$ large in magnitude and negative.

This last integral can be approximated

\begin{aligned}\begin{aligned}\int d^3k\alpha(\mathbf{k}, t_{\text{initial}}) \frac{e^{i k r}}{r} f_\mathbf{k}(\theta, \phi)&\approx\frac{f_{\mathbf{k}_0}(\theta, \phi)}{r}\int d^3k\alpha(\mathbf{k}, t_{\text{initial}}) e^{i k r} \\ &\rightarrow 0\end{aligned}\end{aligned} \hspace{\stretch{1}}(2.10)

This is very much like the 1D case where we found no reflected component for our initial time.

We’ll normally look in a locality well away from the wave front as indicated in figure (\ref{fig:qmTwoL24:qmTwoL24fig5})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.3\textheight]{qmTwoL24fig5}
\caption{point of measurement of scattering cross section}
\end{figure}

There are situations where we do look in the locality of the wave front that has been scattered.

Our incoming wave is of the form

\begin{aligned}\psi_i = A e^{i k z} e^{-i \hbar k^2 t/2 \mu}\end{aligned} \hspace{\stretch{1}}(2.11)

Here we’ve made the approximation that $k = {\left\lvert{\mathbf{k}}\right\rvert} \sim k_z$. We can calculate the probability current

\begin{aligned}\mathbf{j} = \hat{\mathbf{z}} \frac{\hbar k}{\mu} {\left\lvert{A}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(2.12)

(notice the $v = p/m$ like term above, with $p = \hbar k$).

For the scattered wave (dropping the $A$ factor)

\begin{aligned}\mathbf{j} &=\frac{\hbar}{2 \mu i}\left(f_\mathbf{k}^{*}(\theta, \phi) \frac{e^{-i k r}}{r} \boldsymbol{\nabla} \left(f_\mathbf{k}(\theta, \phi) \frac{e^{i k r}}{r}\right)-\boldsymbol{\nabla} \left(f_\mathbf{k}^{*}(\theta, \phi) \frac{e^{-i k r}}{r}\right)f_\mathbf{k}(\theta, \phi) \frac{e^{i k r}}{r}\right)\\ &\approx\frac{\hbar}{2 \mu i}\left(f_\mathbf{k}^{*}(\theta, \phi) \frac{e^{-i k r}}{r} i k \hat{\mathbf{r}} f_\mathbf{k}(\theta, \phi)\frac{e^{i k r}}{r}-f_\mathbf{k}^{*}(\theta, \phi) \frac{e^{-i k r}}{r} (-i k \hat{\mathbf{r}}) f_\mathbf{k}(\theta, \phi)\frac{e^{i k r}}{r}\right)\end{aligned}

We find that the radial portion of the current density is

\begin{aligned}\hat{\mathbf{r}} \cdot \mathbf{j}&= \frac{\hbar}{2 \mu i} {\left\lvert{f}\right\rvert}^2 \frac{ 2 i k }{r^2} \\ &= \frac{\hbar k}{\mu} \frac{1}{{r^2}} {\left\lvert{f}\right\rvert}^2,\end{aligned}

and the flux through our element of solid angle is

\begin{aligned}\hat{\mathbf{r}} dA \cdot \mathbf{j}&=\frac{\text{probability}}{\text{unit area per time}} \times \text{area} \\ &= \frac{\text{probability}}{\text{unit time}} \\ &=\frac{\hbar k}{\mu} \frac{{\left\lvert{f_\mathbf{k}(\theta, \phi)}\right\rvert}^2}{r^2} r^2 d\Omega \\ &=\frac{\hbar k }{\mu}{\left\lvert{f_\mathbf{k}(\theta, \phi)}\right\rvert}^2 d\Omega \\ &=j_{\text{incoming}}\underbrace{{\left\lvert{f_\mathbf{k}(\theta, \phi)}\right\rvert}^2}_{d\sigma/d\Omega} d\Omega.\end{aligned}

We identify the scattering cross section above

\begin{aligned}\frac{d\sigma}{d\Omega}={\left\lvert{f_\mathbf{k}(\theta, \phi)}\right\rvert}^2\end{aligned} \hspace{\stretch{1}}(2.13)

\begin{aligned}\sigma = \int {\left\lvert{f_\mathbf{k}(\theta, \phi)}\right\rvert}^2 d\Omega\end{aligned} \hspace{\stretch{1}}(2.14)
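For instance, with a toy isotropic amplitude $f_\mathbf{k}(\theta, \phi) = f_0$ (a hypothetical choice for illustration only), the total cross section reduces to $4 \pi f_0^2$, which a direct quadrature over the sphere reproduces:

```python
import numpy as np

f0 = 0.7                               # hypothetical constant amplitude

# midpoint grids over theta in [0, pi), phi in [0, 2 pi)
N = 400
theta = (np.arange(N) + 0.5) * np.pi / N
phi = (np.arange(N) + 0.5) * 2 * np.pi / N
T, P = np.meshgrid(theta, phi, indexing='ij')

f = np.full_like(T, f0)                # isotropic f_k(theta, phi)
dOmega = np.sin(T) * (np.pi / N) * (2 * np.pi / N)
sigma = np.sum(np.abs(f)**2 * dOmega)  # sigma = integral of |f|^2 dOmega

assert np.isclose(sigma, 4 * np.pi * f0**2, rtol=1e-4)
```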

We’ve been somewhat unrealistic here since we’ve used a plane wave approximation, but a more realistic wave packet, as in figure (\ref{fig:qmTwoL24:qmTwoL24fig6})

\begin{figure}[htp]
\centering
\includegraphics[totalheight=0.3\textheight]{qmTwoL24fig6}
\caption{Plane wave vs packet wave front}
\end{figure}

will actually produce the same answer. For details we are referred to [2] and [3].

## Working towards a solution

We’ve done a bunch of stuff here but are not much closer to a real solution because we don’t actually know what $f_\mathbf{k}$ is.

Let’s write Schrödinger’s equation

\begin{aligned}-\frac{\hbar^2}{2\mu} \boldsymbol{\nabla}^2\psi_\mathbf{k}(\mathbf{r})+ V(\mathbf{r})\psi_\mathbf{k}(\mathbf{r})=\frac{\hbar^2 \mathbf{k}^2}{2 \mu}\psi_\mathbf{k}(\mathbf{r}),\end{aligned} \hspace{\stretch{1}}(2.15)

\begin{aligned}(\boldsymbol{\nabla}^2 + \mathbf{k}^2)\psi_\mathbf{k}(\mathbf{r})= s(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.16)

where

\begin{aligned}s(\mathbf{r}) = \frac{2\mu}{\hbar^2} V(\mathbf{r}) \psi_\mathbf{k}(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.17)

where $s(\mathbf{r})$ acts as a source term for this differential problem. We want

\begin{aligned}\psi_\mathbf{k}(\mathbf{r}) =\psi_\mathbf{k}^{\text{homogeneous}}(\mathbf{r})+ \psi_\mathbf{k}^{\text{particular}}(\mathbf{r})\end{aligned} \hspace{\stretch{1}}(2.18)

and

\begin{aligned}\psi_\mathbf{k}^{\text{homogeneous}}(\mathbf{r}) = e^{i \mathbf{k} \cdot \mathbf{r}}\end{aligned} \hspace{\stretch{1}}(2.19)
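The homogeneous piece satisfies $(\boldsymbol{\nabla}^2 + \mathbf{k}^2) e^{i \mathbf{k} \cdot \mathbf{r}} = 0$, which can be spot-checked with a finite-difference Laplacian (illustrative values below, not from the notes):

```python
import numpy as np

k = np.array([1.0, 2.0, 2.0])          # a wave vector, |k| = 3
def psi(r):
    return np.exp(1j * np.dot(k, r))   # homogeneous solution e^{i k . r}

r0 = np.array([0.3, -0.2, 0.5])        # an arbitrary evaluation point
h = 1e-4
# central-difference Laplacian along each coordinate axis
lap = sum(
    (psi(r0 + h * e) - 2 * psi(r0) + psi(r0 - h * e)) / h**2
    for e in np.eye(3)
)

# (laplacian + k^2) psi should vanish to finite-difference accuracy
assert abs(lap + np.dot(k, k) * psi(r0)) < 1e-5
```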

# References

[1] B.R. Desai. Quantum Mechanics with Basic Field Theory. Cambridge University Press, 2009.

[2] A. Messiah, G.M. Temmer, and J. Potter. Quantum mechanics: two volumes bound as one. Dover Publications New York, 1999.

[3] J.R. Taylor. Scattering Theory: the Quantum Theory of Nonrelativistic Scattering, volume 1. 1972.

## (INCOMPLETE) Geometry of Maxwell radiation solutions

Posted by peeterjoot on August 18, 2009

# Motivation

We have in GA multiple possible ways to parametrize an oscillatory time dependence for a radiation field.

This was going to be an attempt to systematically solve the resulting eigen-multivector problem, starting with an $I\hat{\mathbf{z}} \omega t$ exponential time parametrization, but I got stuck part way. Perhaps using a plain old $I \omega t$ would work out better, but I’ve spent more time on this than I want for now.

# Setup. The eigenvalue problem.

Again following Jackson ([1]), we use CGS units. Maxwell’s equation in these units, with $F = \mathbf{E} + I\mathbf{B}/\sqrt{\mu\epsilon}$ is

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \partial_0) F \end{aligned} \quad\quad\quad(1)

With an assumed oscillatory time dependence

\begin{aligned}F = \mathcal{F} e^{i\omega t} \end{aligned} \quad\quad\quad(2)

Maxwell’s equation reduces to a multivector eigenvalue problem

\begin{aligned}\boldsymbol{\nabla} \mathcal{F} &= - \mathcal{F} i \lambda \\ \lambda &= \sqrt{\mu\epsilon} \frac{\omega}{c} \end{aligned} \quad\quad\quad(3)

We have some flexibility in picking the imaginary. As well as a non-geometric imaginary $i$ typically used for a phasor representation where we take real parts of the field, we have additional possibilities, two of which are

\begin{aligned}i &= \hat{\mathbf{x}}\hat{\mathbf{y}}\hat{\mathbf{z}} = I \\ i &= \hat{\mathbf{x}} \hat{\mathbf{y}} = I \hat{\mathbf{z}} \end{aligned} \quad\quad\quad(5)

The first is the spatial pseudoscalar, which commutes with all vectors and bivectors. The second is the unit bivector for the transverse plane, here parametrized by duality using the perpendicular to the plane direction $\hat{\mathbf{z}}$.

Let’s examine the geometry required of the object $\mathcal{F}$ for each of these two geometric modeling choices.

# Using the transverse plane bivector for the imaginary.

Assuming no prior assumptions about $\mathcal{F}$ let’s allow for the possibility of scalar, vector, bivector and pseudoscalar components

\begin{aligned}F = e^{-I\hat{\mathbf{z}} \omega t} ( F_0 + F_1 + F_2 + F_3 ) \end{aligned} \quad\quad\quad(7)

Writing $e^{-I\hat{\mathbf{z}} \omega t} = \cos(\omega t) -I \hat{\mathbf{z}} \sin(\omega t) = C_\omega -I \hat{\mathbf{z}} S_\omega$, an expansion of this product separated into grades is

\begin{aligned}F &= C_\omega F_0 - I S_\omega (\hat{\mathbf{z}} \wedge F_2) \\ &+ C_\omega F_1 - \hat{\mathbf{z}} S_\omega (I F_3) + S_\omega (\hat{\mathbf{z}} \times F_1) \\ &+ C_\omega F_2 - I \hat{\mathbf{z}} S_\omega F_0 - I S_\omega (\hat{\mathbf{z}} \cdot F_2) \\ &+ C_\omega F_3 - I S_\omega (\hat{\mathbf{z}} \cdot F_1) \end{aligned}

By construction $F$ has only vector and bivector grades, so a requirement for zero scalar and pseudoscalar for all $t$ means that we have four immediate constraints (with $\mathbf{n} \perp \hat{\mathbf{z}}$.)

\begin{aligned}F_0 &= 0 & \\ F_3 &= 0 & \\ F_2 &= \hat{\mathbf{z}} \wedge \mathbf{m} \\ F_1 &= \mathbf{n} \end{aligned}

Since we have the flexibility to add or subtract any scalar multiple of $\hat{\mathbf{z}}$ to $\mathbf{m}$ we can write $F_2 = \hat{\mathbf{z}} \mathbf{m}$ where $\mathbf{m} \perp \hat{\mathbf{z}}$. Our field can now be written as just

\begin{aligned}F &= C_\omega \mathbf{n} - I S_\omega (\hat{\mathbf{z}} \wedge \mathbf{n}) \\ &+ C_\omega \hat{\mathbf{z}} \mathbf{m} - I S_\omega (\hat{\mathbf{z}} \cdot (\hat{\mathbf{z}} \mathbf{m})) \end{aligned}

We can similarly require $\mathbf{n} \perp \hat{\mathbf{z}}$, leaving

\begin{aligned}F &= (C_\omega - I \hat{\mathbf{z}} S_\omega ) \mathbf{n} + (C_\omega - I \hat{\mathbf{z}} S_\omega) \mathbf{m} \hat{\mathbf{z}} \end{aligned} \quad\quad\quad(8)

So, just the geometrical constraints give us

\begin{aligned}F &= e^{-I\hat{\mathbf{z}} \omega t}(\mathbf{n} + \mathbf{m} \hat{\mathbf{z}}) \end{aligned} \quad\quad\quad(9)

The first thing to be noted is that this phasor representation utilizing for the imaginary the transverse plane bivector $I\hat{\mathbf{z}}$ cannot be the most general. This representation allows for only transverse fields! This can be seen two ways. Computing the transverse and propagation field components we have

\begin{aligned}F_z &= \frac{1}{{2}}(F + \hat{\mathbf{z}} F \hat{\mathbf{z}}) \\ &= \frac{1}{{2}} e^{-I\hat{\mathbf{z}} \omega t}( \mathbf{n} + \mathbf{m} \hat{\mathbf{z}} + \hat{\mathbf{z}} \mathbf{n} \hat{\mathbf{z}} + \hat{\mathbf{z}} \mathbf{m} \hat{\mathbf{z}} \hat{\mathbf{z}}) \\ &= \frac{1}{{2}} e^{-I\hat{\mathbf{z}} \omega t}( \mathbf{n} + \mathbf{m} \hat{\mathbf{z}} - \mathbf{n} - \mathbf{m} \hat{\mathbf{z}} ) \\ &= 0 \end{aligned}

The computation for the transverse field $F_t = (F - \hat{\mathbf{z}} F \hat{\mathbf{z}})/2$ shows that $F = F_t$ as expected since the propagation component is zero.
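This transversality claim can be checked numerically in the Pauli-matrix model of the GA (an illustrative assumption), building $F$ from transverse $\mathbf{n}$, $\mathbf{m}$ and the closed-form rotor $e^{-I\hat{\mathbf{z}} \omega t} = \cos(\omega t) - I\hat{\mathbf{z}} \sin(\omega t)$:

```python
import numpy as np

# Pauli-matrix model of the 3D GA (illustrative assumption: e_k -> sigma_k).
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = e1 @ e2 @ e3
zhat = e3

def vec(a):
    return a[0] * e1 + a[1] * e2 + a[2] * e3

# transverse vectors n, m (no z component) and an arbitrary phase omega t
n = vec([1.0, 2.0, 0.0])
m = vec([-0.5, 1.0, 0.0])
wt = 0.8

R = np.cos(wt) * np.eye(2) - I @ zhat * np.sin(wt)   # e^{-I zhat omega t}
F = R @ (n + m @ zhat)

# the propagation-direction component F_z = (F + zhat F zhat)/2 vanishes
Fz = 0.5 * (F + zhat @ F @ zhat)
assert np.allclose(Fz, 0)
```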

Another way to observe this is from the split of $F$ into electric and magnetic field components. From (8) we have

\begin{aligned}\mathbf{E} &= \cos(\omega t) \mathbf{m} + \sin(\omega t) (\hat{\mathbf{z}} \times \mathbf{m}) \\ \mathbf{B} &= \cos(\omega t) (\hat{\mathbf{z}} \times \mathbf{n}) - \sin(\omega t) \mathbf{n} \end{aligned} \quad\quad\quad(10)

The space containing each of the $\mathbf{E}$ and $\mathbf{B}$ vectors lies in the span of the transverse plane. We also see that there’s some potential redundancy in the representation visible here since we have four vectors describing this span $\mathbf{m}$, $\mathbf{n}$, $\hat{\mathbf{z}} \times \mathbf{m}$, and $\hat{\mathbf{z}} \times \mathbf{n}$, instead of just two.

## General wave packet.

If (1) were a scalar equation for $F(\mathbf{x},t)$, it can readily be shown using Fourier transforms that, given an initial time description of the field, its propagation in time is

\begin{aligned}F(\mathbf{x}, t) = \int \left( \frac{1}{{(2\pi)^3}} \int F(\mathbf{x}', 0) e^{i\mathbf{k} \cdot (\mathbf{x}' -\mathbf{x})} d^3 x' \right) e^{i c \mathbf{k} t/ \sqrt{\mu\epsilon}} d^3 k \end{aligned} \quad\quad\quad(12)

In traditional complex algebra the vector exponentials would not be well formed. We do not have that problem in the GA formalism, but this does lead to a contradiction since the resulting $F(\mathbf{x},t)$ cannot be scalar valued. However, using this as a motivational tool, and using the assumed structure of the discrete frequency infinite wavetrain phasor, we can guess that a transverse only (to the $z$-axis) wave packet may be described by a single direction variant of the Fourier result above. That is

\begin{aligned}F(\mathbf{x}, t) = \frac{1}{{\sqrt{2\pi}}} \int e^{-I \hat{\mathbf{z}} \omega t} \mathcal{F}(\mathbf{x}, \omega)d\omega \end{aligned} \quad\quad\quad(13)

Since (13) has the same form as the earlier single frequency phasor test solution, we now know that $\mathcal{F}$ is required to anticommute with $\hat{\mathbf{z}}$. Application of Maxwell’s equation to this test solution gives us

\begin{aligned}(\boldsymbol{\nabla} +\sqrt{\mu\epsilon} \partial_0) F(\mathbf{x},t) &=(\boldsymbol{\nabla} +\sqrt{\mu\epsilon} \partial_0) \frac{1}{{\sqrt{2\pi}}} \int \mathcal{F}(\mathbf{x}, \omega)e^{I \hat{\mathbf{z}} \omega t} d\omega \\ &=\frac{1}{{\sqrt{2\pi}}}\int\left(\boldsymbol{\nabla} \mathcal{F} + \mathcal{F} I \hat{\mathbf{z}} \sqrt{\mu\epsilon} \frac{\omega}{c}\right) e^{I \hat{\mathbf{z}} \omega t} d\omega \end{aligned}

This means that $\mathcal{F}$ must satisfy the gradient eigenvalue equation for all $\omega$

\begin{aligned}\boldsymbol{\nabla} \mathcal{F} = -\mathcal{F} I \hat{\mathbf{z}} \sqrt{\mu\epsilon} \frac{\omega}{c} \end{aligned} \quad\quad\quad(14)

Observe that this is the single frequency problem of equation (3), so for mono-directional light we can consider the infinite wave train instead of a wave packet with no loss of generality.

## Applying separation of variables.

While this may not lead to the most general solution to the radiation problem, the transverse only propagation problem is still one of interest. Let’s see where this leads. In order to reduce the scope of the problem by one degree of freedom, let’s split out the $\hat{\mathbf{z}}$ component of the gradient, writing

\begin{aligned}\boldsymbol{\nabla} = \boldsymbol{\nabla}_t + \hat{\mathbf{z}} \partial_z \end{aligned} \quad\quad\quad(15)

Also introduce a product split for separation of variables for the $z$ dependence. That is

\begin{aligned}\mathcal{F} = G(x,y) Z(z) \end{aligned} \quad\quad\quad(16)

Again we are faced with the problem of too many choices for the grades of each of these factors. We can pick one of these, say $Z$, to have only scalar and pseudoscalar grades so that the two factors commute. Then we have

\begin{aligned}(\boldsymbol{\nabla}_t + \boldsymbol{\nabla}_z) \mathcal{F} = (\boldsymbol{\nabla}_t G) Z + \hat{\mathbf{z}} G \partial_z Z = -G Z I \hat{\mathbf{z}} \lambda \end{aligned}

With $Z$ in an algebra isomorphic to the complex numbers, it is necessarily invertible (and commutes with its derivative). Similar arguments to the grade fixing for $\mathcal{F}$ show that $G$ has only vector and bivector grades, but does $G$ have the inverse required to do the separation of variables? Let’s blindly suppose that we can do this (and if we can’t we can probably fudge it since we multiply again soon after). With some rearranging we have

\begin{aligned}-\frac{1}{{G}} \hat{\mathbf{z}} (\boldsymbol{\nabla}_t G + G I \hat{\mathbf{z}} \lambda) = (\partial_z Z)\frac{1}{{Z}} = \text{constant} \end{aligned} \quad\quad\quad(17)

We want to separately equate these to a constant. In order to commute these factors we’ve only required that $Z$ have only scalar and pseudoscalar grades, so for the constant let’s pick an arbitrary element in this subspace. That is

\begin{aligned}(\partial_z Z)\frac{1}{{Z}} = \alpha + k I \end{aligned} \quad\quad\quad(18)

The solution for the $Z$ factor in the separation of variables is thus

\begin{aligned}Z \propto e^{(\alpha + k I)z} \end{aligned} \quad\quad\quad(19)
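Since the scalar plus pseudoscalar subalgebra is isomorphic to the complex numbers, (18) and (19) can be spot-checked under the identification $I \rightarrow i$ (a sketch with arbitrary illustrative constants):

```python
import numpy as np

# model the scalar + pseudoscalar element alpha + k I by alpha + 1j k
alpha, k = 0.3, 1.7
def Z(z):
    return np.exp((alpha + 1j * k) * z)   # Z proportional to e^{(alpha + k I) z}

# central-difference derivative at an arbitrary point
z0, h = 0.4, 1e-6
dZ = (Z(z0 + h) - Z(z0 - h)) / (2 * h)

# (dZ/dz) / Z recovers the separation constant alpha + k I
assert np.isclose(dZ / Z(z0), alpha + 1j * k, atol=1e-7)
```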

For $G$ the separation of variables gives us

\begin{aligned}\boldsymbol{\nabla}_t G + (G \hat{\mathbf{z}} \lambda + \hat{\mathbf{z}} G k) I + \hat{\mathbf{z}} G \alpha = 0 \end{aligned} \quad\quad\quad(20)

We’ve now reduced the problem to something like a two variable eigenvalue problem, where the differential operator to find eigenvectors for is the transverse gradient $\boldsymbol{\nabla}_t$. We unfortunately have an untidy split of the eigenvalue into left and right hand factors.

While the product $GZ$ was transverse only, we’ve now potentially lost that nice property for $G$ itself, and do not know if $G$ is strictly commuting or anticommuting with $\hat{\mathbf{z}}$. Assuming either possibility for now, we can split this multivector into transverse and propagation direction fields $G = G_t + G_z$

\begin{aligned}G_t &= \frac{1}{{2}}(G - \hat{\mathbf{z}} G \hat{\mathbf{z}}) \\ G_z &= \frac{1}{{2}}(G + \hat{\mathbf{z}} G \hat{\mathbf{z}}) \end{aligned} \quad\quad\quad(21)

With this split, noting that $\hat{\mathbf{z}} G_t = -G_t \hat{\mathbf{z}}$, and $\hat{\mathbf{z}} G_z = G_z \hat{\mathbf{z}}$ a rearrangement of (20) produces

\begin{aligned}(\nabla_t + \hat{\mathbf{z}} ((k-\lambda) I + \alpha)) G_t = -(\nabla_t + \hat{\mathbf{z}} ((k+\lambda) I + \alpha)) G_z \end{aligned} \quad\quad\quad(23)

How do we find the eigen multivectors $G_t$ and $G_z$? A couple possibilities come to mind (perhaps not encompassing all solutions). One is for one of $G_t$ or $G_z$ to be zero, and the other to separately require both halves of (23) equal a constant, very much like separation of variables despite the fact that both of these functions $G_t$ and $G_z$ are functions of $x$ and $y$. The easiest non-trivial path is probably letting both sides of (23) separately equal zero, so that we are left with two independent eigen-multivector problems to solve

\begin{aligned}\nabla_t G_t &= -\hat{\mathbf{z}} ((k-\lambda) I + \alpha) G_t \\ \nabla_t G_z &= -\hat{\mathbf{z}} ((k+\lambda) I + \alpha) G_z \end{aligned} \quad\quad\quad(24)

Damn. Have to mull this over. Don’t know where to go with it.

# References

[1] J.D. Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.

## Space time algebra solutions of the Maxwell equation for discrete frequencies.

Posted by peeterjoot on July 11, 2009

# Motivation

How to obtain solutions to Maxwell’s equations in vacuum is well known. The aim here is to explore the same problem starting with the Geometric Algebra (GA) formalism ([1]) of the Maxwell equation.

\begin{aligned}\nabla F &= J/\epsilon_0 c \\ F &= \nabla \wedge A = \mathbf{E} + i c \mathbf{B} \end{aligned} \quad\quad\quad(1)

A Fourier transformation attack on the equation should be possible, so let’s see what falls out doing so.

## Fourier problem.

Picking an observer bias for the gradient by premultiplying with $\gamma_0$, the vacuum equation for light can therefore also be written as

\begin{aligned}0&= \gamma_0 \nabla F \\ &= \gamma_0 (\gamma^0 \partial_0 + \gamma^k \partial_k) F \\ &= (\partial_0 - \gamma^k \gamma_0 \partial_k) F \\ &= (\partial_0 + \sigma^k \partial_k) F \\ &= \left(\frac{1}{c}\partial_t + \boldsymbol{\nabla} \right) F \\ \end{aligned}

A Fourier transformation of this equation produces

\begin{aligned}0 &= \frac{1}{c} \frac{\partial \hat{F}}{\partial t}(\mathbf{k},t) + \frac{1}{(\sqrt{2\pi})^3} \int \sigma^m \partial_m F(\mathbf{x},t) e^{-i \mathbf{k} \cdot \mathbf{x}} d^3 x \end{aligned}

and with a single integration by parts one has

\begin{aligned}0&= \frac{1}{c} \frac{\partial \hat{F}}{\partial t}(\mathbf{k},t) - \frac{1}{(\sqrt{2\pi})^3} \int \sigma^m F(\mathbf{x},t) (-i k_m) e^{-i \mathbf{k} \cdot \mathbf{x}} d^3 x \\ &= \frac{1}{c} \frac{\partial \hat{F}}{\partial t}(\mathbf{k},t) + \frac{1}{(\sqrt{2\pi})^3} \int \mathbf{k} F(\mathbf{x},t) i e^{-i \mathbf{k} \cdot \mathbf{x}} d^3 x \\ &= \frac{1}{c} \frac{\partial \hat{F}}{\partial t}(\mathbf{k},t) + i \mathbf{k} \hat{F}(\mathbf{k},t) \end{aligned}

The pseudoscalar $i = \gamma_0 \gamma_1 \gamma_2 \gamma_3$ has been employed as the imaginary above, so it should be noted that pseudoscalar commutation with the Dirac bivectors was implied, and also that we do not have the flexibility to commute $\mathbf{k}$ with $F$.
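The integration-by-parts step, turning the gradient into multiplication by $i \mathbf{k}$ in the transform domain, can be illustrated in a 1D scalar analogue with a discrete Fourier transform (an illustrative sketch, not the GA computation itself):

```python
import numpy as np

# sample a localized test field on a periodic grid
N, L = 256, 20.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # matching wave numbers

f = np.exp(-x**2)                            # Gaussian, negligible at the edges
df_exact = -2 * x * np.exp(-x**2)            # analytic derivative

# differentiation becomes multiplication by i k in the transform domain
df_fft = np.fft.ifft(1j * k * np.fft.fft(f)).real

assert np.allclose(df_fft, df_exact, atol=1e-8)
```

Spectral accuracy is expected here because the Gaussian is effectively compactly supported on the grid.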

Having done this, the problem to solve is now Maxwell’s vacuum equation in the frequency domain

\begin{aligned}\frac{\partial \hat{F}}{\partial t}(\mathbf{k},t) = -i c \mathbf{k} \hat{F}(\mathbf{k},t) \end{aligned}

Introducing an angular frequency (spatial) bivector, and its vector dual

\begin{aligned}\Omega &= -i c \mathbf{k} \\ \boldsymbol{\omega} &= c \mathbf{k} \end{aligned}

This becomes

\begin{aligned}\hat{F}' = \Omega \hat{F} \end{aligned} \quad\quad\quad(5)

With solution

\begin{aligned}\hat{F} = e^{\Omega t} \hat{F}(\mathbf{k},0) \end{aligned}

Differentiation with respect to time verifies that the ordering of the terms is correct and that this does in fact solve (5). Such care is required because of the possibility of non-commuting variables.
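That ordering check can also be done numerically. Here is a sketch using the Dirac matrix representation of the STA (numpy assumed, units with $c = 1$, and $\hat{\mathbf{k}} = \mathbf{e}_3$ chosen purely for illustration):

```python
import numpy as np

# Dirac matrix representation of the STA (an illustrative model only).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2, I4 = np.eye(2), np.zeros((2, 2)), np.eye(4)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g = [np.block([[Z2, sk], [-sk, Z2]]) for sk in (s1, s2, s3)]
i4 = g0 @ g[0] @ g[1] @ g[2]       # pseudoscalar
sig = [gk @ g0 for gk in g]        # sigma_k = gamma_k gamma_0

ck = 2.0                           # c|k|, arbitrary, with khat = e3
Om = -i4 @ (ck * sig[2])           # Omega = -i c k
assert np.allclose(Om @ Om, -ck**2 * I4)   # Omega^2 is a negative scalar

def expOm(t):
    # e^{Omega t} = cos(ck t) + Omega sin(ck t)/(ck), since Omega^2 = -(ck)^2
    return np.cos(ck * t) * I4 + np.sin(ck * t) / ck * Om

F0 = sig[0] + i4 @ sig[1]          # a sample initial value E + i c B
t, h = 0.3, 1e-6
dFdt = (expOm(t + h) - expOm(t - h)) @ F0 / (2 * h)   # central difference
assert np.allclose(dFdt, Om @ expOm(t) @ F0, atol=1e-6)  # F' = Omega F holds
assert not np.allclose(Om @ F0, F0 @ Om)   # the ordering genuinely matters
```

The final assertion shows that $\Omega$ and $\hat{F}$ do not commute, so writing the exponential on the left is not an arbitrary choice.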

Back substitution into the inverse transform now supplies the time evolution of the field given the initial time specification

\begin{aligned}F(\mathbf{x},t)&= \frac{1}{(\sqrt{2\pi})^3} \int e^{\Omega t} \hat{F}(\mathbf{k},0) e^{i \mathbf{k} \cdot \mathbf{x}} d^3 k \\ &= \frac{1}{(2\pi)^3} \int e^{\Omega t} \left( \int {F}(\mathbf{x}',0) e^{-i \mathbf{k} \cdot \mathbf{x}'} d^3 x' \right) e^{i \mathbf{k} \cdot \mathbf{x}} d^3 k \end{aligned}

Observe that pseudoscalar exponentials commute with the field, because $i$ commutes with spatial vectors and with itself

\begin{aligned}F e^{i\theta}&= (\mathbf{E} + i c \mathbf{B}) (C + iS) \\ &=C (\mathbf{E} + i c \mathbf{B})+ S (\mathbf{E} + i c \mathbf{B}) i \\ &=C (\mathbf{E} + i c \mathbf{B})+ S i (\mathbf{E} + i c \mathbf{B}) \\ &=e^{i\theta} F \end{aligned}
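This commutation is also easy to verify numerically; a minimal check in the Dirac matrix representation of the STA (numpy assumed, with arbitrary sample field magnitudes):

```python
import numpy as np

# Dirac matrix representation of the STA (an illustrative model only).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2, I4 = np.eye(2), np.zeros((2, 2)), np.eye(4)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g = [np.block([[Z2, sk], [-sk, Z2]]) for sk in (s1, s2, s3)]
i4 = g0 @ g[0] @ g[1] @ g[2]
sig = [gk @ g0 for gk in g]

theta = 0.7
exp_it = np.cos(theta) * I4 + np.sin(theta) * i4   # e^{i theta} = C + i S
F = 2.0 * sig[0] + i4 @ (3.0 * sig[1])             # F = E + i c B, E = 2 e1, cB = 3 e2
assert np.allclose(F @ exp_it, exp_it @ F)         # F e^{i theta} = e^{i theta} F
```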

This allows the specifics of the initial time conditions to be suppressed

\begin{aligned}F(\mathbf{x},t) &= \int d^3 k e^{\Omega t} e^{i \mathbf{k} \cdot \mathbf{x}} \int \frac{1}{(2\pi)^3} {F}(\mathbf{x}',0) e^{-i \mathbf{k} \cdot \mathbf{x}'} d^3 x' \end{aligned}

The interior integral plays the role of a weighting function over plane wave solutions, and this can be made explicit by writing

\begin{aligned}D(\mathbf{k}) &= \frac{1}{(2\pi)^3} \int {F}(\mathbf{x}',0) e^{-i \mathbf{k} \cdot \mathbf{x}'} d^3 x' \\ F(\mathbf{x},t) &= \int e^{\Omega t} e^{i \mathbf{k} \cdot \mathbf{x}} D(\mathbf{k}) d^3 k \end{aligned}

Many assumptions have been made here, not the least of which was a requirement for the Fourier transform of a bivector valued function to be meaningful and to have an inverse. It is therefore reasonable to verify that this weighted plane wave result is in fact a solution to the original Maxwell vacuum equation. Differentiation verifies that things are okay so far

\begin{aligned}\gamma_0 \nabla F(\mathbf{x},t)&=\left(\frac{1}{c}\partial_t + \boldsymbol{\nabla} \right)\int e^{\Omega t} e^{i \mathbf{k} \cdot \mathbf{x}} D(\mathbf{k}) d^3 k \\ &=\int \left(\frac{1}{c}\Omega e^{\Omega t} + \sigma^m e^{\Omega t} i k_m \right) e^{i \mathbf{k} \cdot \mathbf{x}} D(\mathbf{k}) d^3 k \\ &=\int \left(\frac{1}{c}(-i \mathbf{k} c) + i \mathbf{k} \right) e^{\Omega t} e^{i \mathbf{k} \cdot \mathbf{x}} D(\mathbf{k}) d^3 k \\ &= 0 \quad\quad\quad\square \end{aligned}

The fact that the integral has zero gradient does not mean that it is a bivector, so there must also be restrictions on the grades of $D(\mathbf{k})$.

To simplify discussion, let’s discretize the integral by writing

\begin{aligned}D(\mathbf{k}') = D_\mathbf{k} \delta^3 (\mathbf{k} - \mathbf{k}') \end{aligned}

So we have

\begin{aligned}F(\mathbf{x},t)&= \int e^{\Omega t} e^{i \mathbf{k}' \cdot \mathbf{x}} D(\mathbf{k}') d^3 k' \\ &= \int e^{\Omega t} e^{i \mathbf{k}' \cdot \mathbf{x}} D_\mathbf{k} \delta^3(\mathbf{k} - \mathbf{k}') d^3 k' \\ \end{aligned}

This produces something planewave-ish

\begin{aligned}F(\mathbf{x},t) &= e^{\Omega t} e^{i \mathbf{k} \cdot \mathbf{x}} D_\mathbf{k} \end{aligned} \quad\quad\quad(10)

Observe that at $t=0$ we have

\begin{aligned}F(\mathbf{x},0)&= e^{i \mathbf{k} \cdot \mathbf{x}} D_\mathbf{k} \\ &= (\cos (\mathbf{k} \cdot \mathbf{x}) + i \sin(\mathbf{k} \cdot \mathbf{x})) D_\mathbf{k} \\ \end{aligned}

There is therefore a requirement for $D_\mathbf{k}$ to be either a spatial vector or its dual, a spatial bivector. For example, taking $D_\mathbf{k}$ to be a spatial vector we can then identify the electric and magnetic components of the field

\begin{aligned}\mathbf{E}(\mathbf{x},0) &= \cos (\mathbf{k} \cdot \mathbf{x}) D_\mathbf{k} \\ c \mathbf{B}(\mathbf{x},0) &= \sin (\mathbf{k} \cdot \mathbf{x}) D_\mathbf{k} \end{aligned}

and if $D_\mathbf{k}$ is taken to be a spatial bivector, this pair of identifications would be inverted.

Considering (10) at $\mathbf{x}=0$, we have

\begin{aligned}F(0, t)&= e^{\Omega t} D_\mathbf{k} \\ &= (\cos({\left\lvert{\Omega}\right\rvert} t) + \hat{\Omega} \sin({\left\lvert{\Omega}\right\rvert} t)) D_\mathbf{k} \\ &= (\cos({\left\lvert{\Omega}\right\rvert} t) - i \hat{\mathbf{k}} \sin({\left\lvert{\Omega}\right\rvert} t)) D_\mathbf{k} \\ \end{aligned}

If $D_\mathbf{k}$ is first assumed to be a spatial vector, then $F$ would have a pseudoscalar component if $D_\mathbf{k}$ has any component parallel to $\hat{\mathbf{k}}$.

\begin{aligned}D_\mathbf{k} \in \text{span}\{\sigma^m\} \implies D_\mathbf{k} \cdot \hat{\mathbf{k}} = 0 \end{aligned} \quad\quad\quad(11)

\begin{aligned}D_\mathbf{k} \in \text{span}\{\sigma^a \wedge \sigma^b\} \implies D_\mathbf{k} \cdot (i\hat{\mathbf{k}}) = 0 \end{aligned} \quad\quad\quad(12)
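The appearance of an unwanted pseudoscalar grade for a propagation-direction component of $D_\mathbf{k}$ can be checked numerically. A sketch using the Dirac matrix representation of the STA (numpy, with $\hat{\mathbf{k}} = \mathbf{e}_3$ assumed for illustration):

```python
import numpy as np

# Dirac matrix representation of the STA (an illustrative model only).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2, I4 = np.eye(2), np.zeros((2, 2)), np.eye(4)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g = [np.block([[Z2, sk], [-sk, Z2]]) for sk in (s1, s2, s3)]
i4 = g0 @ g[0] @ g[1] @ g[2]
sig = [gk @ g0 for gk in g]

def pseudo_part(X):
    # coefficient of i in X; grades are trace-orthogonal in this representation
    return (np.trace(-i4 @ X) / 4).real

wt = 0.4                                           # |Omega| t, arbitrary
eOt = np.cos(wt) * I4 - np.sin(wt) * i4 @ sig[2]   # e^{Omega t} = cos - i khat sin
assert abs(pseudo_part(eOt @ sig[2]) + np.sin(wt)) < 1e-12  # D_k parallel: pseudoscalar appears
assert abs(pseudo_part(eOt @ sig[0])) < 1e-12               # D_k perpendicular: none
```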

Since we can convert between the spatial vector and bivector cases using a duality transformation, it may appear that there is no loss of generality in imposing a spatial vector restriction on $D_\mathbf{k}$, at least in this current free case. However, an attempt to impose that restriction leads to trouble. In particular, it leads to collinear electric and magnetic fields, and thus to the odd seeming condition where the field energy density is non-zero but the field momentum density (Poynting vector $\mathbf{P} \propto \mathbf{E} \times \mathbf{B}$) is zero. In retrospect, being forced down the path of including both grades is not unreasonable, especially since this gives $D_\mathbf{k}$ precisely the form of the field itself, $F = \mathbf{E} + i c \mathbf{B}$.

# Electric and Magnetic field split.

With the basic form of the Maxwell vacuum solution determined, we are now ready to start extracting information from the solution and making comparisons with the more familiar vector form. To start, the phasor form of the fundamental solution can be expanded explicitly in terms of two arbitrary spatial parametrization vectors $\mathbf{E}_\mathbf{k}$ and $\mathbf{B}_\mathbf{k}$.

\begin{aligned}F &= e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) \end{aligned} \quad\quad\quad(13)

Whether these parametrization vectors have any relation to electric and magnetic fields respectively will have to be determined, but making that assumption for now to label these uniquely doesn’t seem unreasonable.

From (13) we can compute the electric and magnetic fields by the conjugate relations (25). Our conjugate is

\begin{aligned}F^\dagger&= (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) e^{-i \mathbf{k} \cdot \mathbf{x}} e^{i\boldsymbol{\omega} t} \\ &=e^{-i\boldsymbol{\omega} t}e^{-i \mathbf{k} \cdot \mathbf{x}}(\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) \end{aligned}

Thus for the electric field

\begin{aligned}F + F^\dagger&=e^{-i\boldsymbol{\omega} t} \left( e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k})+e^{-i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k})\right) \\ &=e^{-i\boldsymbol{\omega} t} \left( 2 \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}+ i c (2 i) \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \\ &=2 \cos(\omega t) \left( \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}- c \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \\ &+ 2\sin(\omega t)\hat{\mathbf{k}} \times\left( \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}- c \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \\ \end{aligned}

So for the electric field $\mathbf{E} = \frac{1}{2}(F + F^\dagger)$ we have

\begin{aligned}\mathbf{E} &=\left( \cos(\omega t) + \sin(\omega t) \hat{\mathbf{k}} \times \right)\left( \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}- c \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \end{aligned} \quad\quad\quad(14)

Similarly for the magnetic field we have

\begin{aligned}F - F^\dagger&=e^{-i\boldsymbol{\omega} t} \left( e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k})-e^{-i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k})\right) \\ &=e^{-i\boldsymbol{\omega} t} \left( 2 i \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}+ 2 i c \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \\ \end{aligned}

So for the magnetic field $c \mathbf{B} = \frac{1}{2i}(F - F^\dagger)$ we have

\begin{aligned}c \mathbf{B} &=\left( \cos(\omega t) + \sin(\omega t) \hat{\mathbf{k}} \times \right)\left( \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}+ c \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \end{aligned} \quad\quad\quad(15)

Observe that the action of the time dependent phasor has been expressed, somewhat abusively and sneakily, in a scalar plus cross product operator form. The end result, when applied to a vector perpendicular to $\hat{\mathbf{k}}$, is still a vector

\begin{aligned}e^{-i\boldsymbol{\omega} t} \mathbf{a}&=\left( \cos(\omega t) + \sin(\omega t) \hat{\mathbf{k}} \times \right) \mathbf{a} \end{aligned}
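This scalar plus cross product action can be confirmed numerically; a minimal check with the Dirac matrix representation of the STA, taking $\hat{\mathbf{k}} = \mathbf{e}_3$ and $\mathbf{a} = \mathbf{e}_1$ (arbitrary illustrative choices):

```python
import numpy as np

# Dirac matrix representation of the STA (an illustrative model only).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2, I4 = np.eye(2), np.zeros((2, 2)), np.eye(4)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g = [np.block([[Z2, sk], [-sk, Z2]]) for sk in (s1, s2, s3)]
i4 = g0 @ g[0] @ g[1] @ g[2]
sig = [gk @ g0 for gk in g]

wt = 0.4                                          # omega t, arbitrary
rot = np.cos(wt) * I4 - np.sin(wt) * i4 @ sig[2]  # e^{-i omega t}, omega along e3
a = sig[0]                                        # a = e1, perpendicular to khat
# khat x a = e3 x e1 = e2, so expect cos(wt) e1 + sin(wt) e2
assert np.allclose(rot @ a, np.cos(wt) * sig[0] + np.sin(wt) * sig[1])
```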

Also observe that the Hermitian conjugate split of the total field bivector $F$ produces vectors $\mathbf{E}$ and $\mathbf{B}$, not phasors. There is no further need to take real or imaginary parts nor treat the phasor (13) as an artificial mathematical construct used for convenience only.

## Polar Form.

Suppose an explicit polar form is introduced for the plane vectors $\mathbf{E}_\mathbf{k}$, and $\mathbf{B}_\mathbf{k}$. Let

\begin{aligned}\mathbf{E}_\mathbf{k} &= E {\hat{\mathbf{E}}_k} \\ \mathbf{B}_\mathbf{k} &= B {\hat{\mathbf{E}}_k} e^{i\hat{\mathbf{k}} \theta} \end{aligned}

Then for the field we have

\begin{aligned}F &= e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (E + i c B e^{-i\hat{\mathbf{k}} \theta}) \hat{\mathbf{E}}_k \end{aligned} \quad\quad\quad(16)

For the conjugate

\begin{aligned}F^\dagger&=\hat{\mathbf{E}}_k(E - i c B e^{i\hat{\mathbf{k}} \theta})e^{-i \mathbf{k} \cdot \mathbf{x}}e^{i\boldsymbol{\omega} t} \\ &=e^{-i\boldsymbol{\omega} t} e^{-i \mathbf{k} \cdot \mathbf{x}} (E - i c B e^{-i\hat{\mathbf{k}} \theta}) \hat{\mathbf{E}}_k \end{aligned}

So, in the polar form we have for the electric, and magnetic fields

\begin{aligned}\mathbf{E} &= e^{-i\boldsymbol{\omega} t} (E \cos(\mathbf{k} \cdot \mathbf{x}) - c B \sin(\mathbf{k} \cdot \mathbf{x}) e^{-i \hat{\mathbf{k}}\theta}) \hat{\mathbf{E}}_k \\ c \mathbf{B} &= e^{-i\boldsymbol{\omega} t} (E \sin(\mathbf{k} \cdot \mathbf{x}) + c B \cos(\mathbf{k} \cdot \mathbf{x}) e^{-i \hat{\mathbf{k}}\theta}) \hat{\mathbf{E}}_k \end{aligned} \quad\quad\quad(17)

Observe that when $\theta$ is an integer multiple of $\pi$, $\mathbf{E}$ and $\mathbf{B}$ are collinear, having the zero Poynting vector mentioned previously. For arbitrary $\theta$, however, it does not appear that there is any inherent perpendicularity between the electric and magnetic fields. It is common to read of light being the propagation of perpendicular fields, both perpendicular to the propagation direction. We have perpendicularity to the propagation direction by virtue of requiring that the field be a (Dirac) bivector, but it does not look like the solution requires any inherent perpendicularity between the field components. It appears that a mutually perpendicular triplet of field vectors and propagation direction must actually be a special case.
Intuition says that this freedom to pick different magnitudes or angles between $\mathbf{E}_\mathbf{k}$ and $\mathbf{B}_\mathbf{k}$ in the plane perpendicular to the transmission direction may correspond to different mixes of linear, circular, and elliptic polarization, but this has to be confirmed.

Working towards confirming (or disproving) this intuition, let’s find the constraints on the fields that lead to mutually perpendicular electric and magnetic fields. This should follow by taking dot products

\begin{aligned}\mathbf{E} \cdot \mathbf{B} c&=\left\langle{e^{-i\boldsymbol{\omega} t} (E \cos(\mathbf{k} \cdot \mathbf{x}) - c B \sin(\mathbf{k} \cdot \mathbf{x}) e^{-i \hat{\mathbf{k}}\theta}) \hat{\mathbf{E}}_k\hat{\mathbf{E}}_ke^{i\boldsymbol{\omega} t} (E \sin(\mathbf{k} \cdot \mathbf{x}) + c B \cos(\mathbf{k} \cdot \mathbf{x}) e^{i \hat{\mathbf{k}}\theta})}\right\rangle \\ &=\left\langle{(E \cos(\mathbf{k} \cdot \mathbf{x}) - c B \sin(\mathbf{k} \cdot \mathbf{x}) e^{-i \hat{\mathbf{k}}\theta})(E \sin(\mathbf{k} \cdot \mathbf{x}) + c B \cos(\mathbf{k} \cdot \mathbf{x}) e^{i \hat{\mathbf{k}}\theta})}\right\rangle \\ &=(E^2 - c^2 B^2) \cos(\mathbf{k} \cdot \mathbf{x}) \sin(\mathbf{k} \cdot \mathbf{x})+ c E B\left\langle{\cos^2(\mathbf{k} \cdot \mathbf{x}) e^{i \hat{\mathbf{k}} \theta}-\sin^2(\mathbf{k} \cdot \mathbf{x}) e^{-i \hat{\mathbf{k}} \theta}}\right\rangle \\ &=(E^2 - c^2 B^2) \cos(\mathbf{k} \cdot \mathbf{x}) \sin(\mathbf{k} \cdot \mathbf{x})+ c E B \cos(\theta) ( \cos^2(\mathbf{k} \cdot \mathbf{x}) -\sin^2(\mathbf{k} \cdot \mathbf{x}) ) \\ &=\frac{1}{2} (E^2 - c^2 B^2) \sin(2 \mathbf{k} \cdot \mathbf{x})+ c E B \cos(\theta) \cos(2 \mathbf{k} \cdot \mathbf{x}) \\ \end{aligned}

The only way this can be zero for all $\mathbf{x}$ is if the two terms are separately zero, which means

\begin{aligned}{\left\lvert{\mathbf{E}_k}\right\rvert} &= c {\left\lvert{\mathbf{B}_k}\right\rvert} \\ \theta &= \frac{\pi}{2} + n \pi \end{aligned}

This simplifies the phasor considerably, leaving

\begin{aligned}E + i c B e^{-i\hat{\mathbf{k}} \theta}&=E(1 + i (\mp i\hat{\mathbf{k}} )) \\ &=E(1 \pm \hat{\mathbf{k}}) \end{aligned}

So the field is just

\begin{aligned}F = e^{-i \boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (1 \pm \hat{\mathbf{k}}) \mathbf{E}_\mathbf{k} \end{aligned}

Using this, and some regrouping, a calculation of the field components yields

\begin{aligned}\mathbf{E} &= e^{i \hat{\mathbf{k}}( \pm \mathbf{k} \cdot \mathbf{x} -\omega t )} \mathbf{E}_\mathbf{k} \\ c \mathbf{B} &= \pm e^{i \hat{\mathbf{k}}( \pm \mathbf{k} \cdot \mathbf{x} -\omega t )} i \hat{\mathbf{k}} \mathbf{E}_\mathbf{k} \end{aligned}

Observe that $i\hat{\mathbf{k}}$ rotates any vector in the plane perpendicular to $\hat{\mathbf{k}}$ by 90 degrees, so we have here $c \mathbf{B} = \pm \hat{\mathbf{k}} \times \mathbf{E}$, the coupling constraint on the fields for linearly polarized plane waves. Without the constraint $\mathbf{E} \cdot \mathbf{B} = 0$, it appears that we cannot have such a simple coupling between the field components.
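The 90 degree rotation action of $i\hat{\mathbf{k}}$ is easy to confirm in a matrix model (Dirac representation of the STA, $\hat{\mathbf{k}} = \mathbf{e}_3$ assumed); the sign of the rotation is absorbed into the $\pm$:

```python
import numpy as np

# Dirac matrix representation of the STA (an illustrative model only).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2, I4 = np.eye(2), np.zeros((2, 2)), np.eye(4)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g = [np.block([[Z2, sk], [-sk, Z2]]) for sk in (s1, s2, s3)]
i4 = g0 @ g[0] @ g[1] @ g[2]
sig = [gk @ g0 for gk in g]

# i khat acting on vectors perpendicular to khat = e3 rotates them by 90 degrees:
assert np.allclose(i4 @ sig[2] @ sig[0], -sig[1])   # (i khat) e1 = -e2 = -(khat x e1)
assert np.allclose(i4 @ sig[2] @ sig[1], sig[0])    # (i khat) e2 =  e1 = -(khat x e2)
```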

# Energy and momentum for the phasor

To calculate the field energy density we can work with the two fields of equations (17), or work with the phasor (13) directly. From the phasor and the energy-momentum four vector (28) we have for the energy density

\begin{aligned}U &= T(\gamma_0) \cdot \gamma_0 \\ &= \frac{-\epsilon_0}{2}\left\langle F \gamma_0 F \gamma_0 \right\rangle \\ &= \frac{-\epsilon_0}{2}\left\langle{e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) \gamma_0 e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) \gamma_0 }\right\rangle \\ &= \frac{-\epsilon_0}{2}\left\langle{e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) (\gamma_0)^2 e^{-i\boldsymbol{\omega} t} e^{-i \mathbf{k} \cdot \mathbf{x}} (-\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) }\right\rangle \\ &= \frac{-\epsilon_0}{2}\left\langle{e^{-i\boldsymbol{\omega} t} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) e^{-i\boldsymbol{\omega} t} (-\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) }\right\rangle \\ &= \frac{\epsilon_0}{2}\left\langle (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) \right\rangle \\ &= \frac{\epsilon_0}{2} \left( (\mathbf{E}_k)^2 + c^2 (\mathbf{B}_\mathbf{k})^2\right) + {c \epsilon_0} \left\langle i \mathbf{E}_\mathbf{k} \wedge \mathbf{B}_\mathbf{k} \right\rangle \\ &= \frac{\epsilon_0}{2} \left( (\mathbf{E}_k)^2 + c^2 (\mathbf{B}_\mathbf{k})^2\right) + {c \epsilon_0} \left\langle \mathbf{B}_\mathbf{k} \times \mathbf{E}_\mathbf{k} \right\rangle \\ \end{aligned}

Quite anticlimactically, we have for the energy just the sum of the energies associated with the parametrization constants, lending some justification for the initial choice to label these as electric and magnetic fields

\begin{aligned}U = \frac{\epsilon_0}{2} \left( (\mathbf{E}_k)^2 + c^2 (\mathbf{B}_\mathbf{k})^2\right) \end{aligned}

For the momentum, we want the difference of $F F^\dagger$ and $F^\dagger F$

\begin{aligned}F F^\dagger &= e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) e^{-i \mathbf{k} \cdot \mathbf{x}} e^{i\boldsymbol{\omega} t} \\ &= (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) \\ &= (\mathbf{E}_\mathbf{k})^2 + c^2 (\mathbf{B}_\mathbf{k})^2 - 2 c \mathbf{B}_\mathbf{k} \times \mathbf{E}_\mathbf{k} \end{aligned}

\begin{aligned}F^\dagger F &= (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) e^{-i \mathbf{k} \cdot \mathbf{x}} e^{i\boldsymbol{\omega} t} e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) \\ &= (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) \\ &= (\mathbf{E}_\mathbf{k})^2 + c^2 (\mathbf{B}_\mathbf{k})^2 + 2 c \mathbf{B}_\mathbf{k} \times \mathbf{E}_\mathbf{k} \end{aligned}

So we have for the momentum, also anticlimactically

\begin{aligned}\mathbf{P} = \frac{1}{c} T(\gamma_0) \wedge \gamma_0 = \epsilon_0 \mathbf{E}_\mathbf{k} \times \mathbf{B}_\mathbf{k} \end{aligned}

# Followup.

Well, that’s enough for one day. Understanding how to express circular and eliptic polarization is one of the logical next steps. I seem to recall from Susskind’s QM lectures that these can be considered superpositions of linearly polarized waves, so examining a sum of two co-directionally propagating fields would seem to be in order. Also there ought to be a more natural way to express the perpendicularity requirement for the field and the propagation direction. The fact that the field components and propagation direction when all multiplied is proportional to the spatial pseudoscalar can probably be utilized to tidy this up and also produce a form that allows for simpler summation of fields in different propagation directions. It also seems reasonable to consider a planar Fourier decomposition of the field components, perhaps framing the superposition of multiple fields in that context.

# Appendix. Background details.

## Conjugate split

The Hermitian conjugate is defined as

\begin{aligned}A^\dagger = \gamma_0 \tilde{A} \gamma_0 \end{aligned}

The conjugate action on a multivector product is straightforward to calculate

\begin{aligned}(A B)^\dagger&= \gamma_0 (A B)^{\tilde{}} \gamma_0 \\ &= \gamma_0 \tilde{B} \tilde{A} \gamma_0 \\ &= \gamma_0 \tilde{B} {\gamma_0}^2 \tilde{A} \gamma_0 \\ &= B^\dagger A^\dagger \end{aligned}

For a spatial vector, Hermitian conjugation leaves the vector unaltered

\begin{aligned}\mathbf{a}^\dagger&= \gamma_0 (\gamma_k \gamma_0)^{\tilde{}} a^k \gamma_0 \\ &= \gamma_0 (\gamma_0 \gamma_k) a^k \gamma_0 \\ &= \gamma_k a^k \gamma_0 \\ &= \mathbf{a} \end{aligned}

But the pseudoscalar is negated

\begin{aligned}i^\dagger&=\gamma_0 \tilde{i} \gamma_0 \\ &=\gamma_0 i \gamma_0 \\ &=-\gamma_0 \gamma_0 i \\ &=- i \\ \end{aligned}

This allows for a split by conjugation of the field into its electric and magnetic field components.

\begin{aligned}F^\dagger&= -\gamma_0 ( \mathbf{E} + i c \mathbf{B}) \gamma_0 \\ &= -\gamma_0^2 ( -\mathbf{E} + i c \mathbf{B}) \\ &= \mathbf{E} - i c\mathbf{B} \\ \end{aligned}

So we have

\begin{aligned}\mathbf{E} &= \frac{1}{2}(F + F^\dagger) \\ c \mathbf{B} &= \frac{1}{2i}(F - F^\dagger) \end{aligned} \quad\quad\quad(25)

## Field Energy Momentum density four vector.

In the GA formalism the energy momentum tensor is

\begin{aligned}T(a) = \frac{\epsilon_0}{2} F a \tilde{F} \end{aligned}

It is not necessarily obvious that this bivector-vector-bivector product construction is even a vector quantity. Expansion of $T(\gamma_0)$ in terms of the electric and magnetic fields demonstrates this vectorial nature.

\begin{aligned}F \gamma_0 \tilde{F}&=-(\mathbf{E} + i c \mathbf{B}) \gamma_0 (\mathbf{E} + i c \mathbf{B}) \\ &=-\gamma_0 (-\mathbf{E} + i c \mathbf{B}) (\mathbf{E} + i c \mathbf{B}) \\ &=-\gamma_0 (-\mathbf{E}^2 - c^2 \mathbf{B}^2 + i c (\mathbf{B} \mathbf{E} - \mathbf{E} \mathbf{B}) ) \\ &=\gamma_0 (\mathbf{E}^2 + c^2 \mathbf{B}^2) - 2 \gamma_0 i c (\mathbf{B} \wedge \mathbf{E}) \\ &=\gamma_0 (\mathbf{E}^2 + c^2 \mathbf{B}^2) + 2 \gamma_0 c (\mathbf{B} \times \mathbf{E}) \\ &=\gamma_0 (\mathbf{E}^2 + c^2 \mathbf{B}^2) + 2 \gamma_0 c \gamma_k \gamma_0 (\mathbf{B} \times \mathbf{E})^k \\ &=\gamma_0 (\mathbf{E}^2 + c^2 \mathbf{B}^2) + 2 \gamma_k (\mathbf{E} \times (c \mathbf{B}))^k \\ \end{aligned}

Therefore $T(\gamma_0)$, the energy momentum tensor biased towards a particular observer frame $\gamma_0$, is

\begin{aligned}T(\gamma_0)&=\gamma_0 \frac{\epsilon_0}{2} (\mathbf{E}^2 + c^2 \mathbf{B}^2) + \gamma_k \epsilon_0 (\mathbf{E} \times (c \mathbf{B}))^k \end{aligned} \quad\quad\quad(28)

Recognizable in the components of $T(\gamma_0)$ are the field energy density and momentum density. In particular, the energy density can be obtained by dotting with $\gamma_0$, whereas the (spatial vector) momentum density follows by wedging with $\gamma_0$.

These are

\begin{aligned}U \equiv T(\gamma_0) \cdot \gamma_0 &= \frac{1}{2} \left( \epsilon_0 \mathbf{E}^2 + \frac{1}{\mu_0} \mathbf{B}^2 \right) \\ c \mathbf{P} \equiv T(\gamma_0) \wedge \gamma_0 &= \frac{1}{\mu_0 c} \mathbf{E} \times \mathbf{B} \end{aligned}

In terms of the combined field these are

\begin{aligned}U &= \frac{-\epsilon_0}{4}( F \gamma_0 F \gamma_0 + \gamma_0 F \gamma_0 F) \\ c \mathbf{P} &= \frac{-\epsilon_0}{4}( F \gamma_0 F \gamma_0 - \gamma_0 F \gamma_0 F) \end{aligned}

Summarizing with the Hermitian conjugate

\begin{aligned}U &= \frac{\epsilon_0}{4}( F F^\dagger + F^\dagger F) \\ c \mathbf{P} &= \frac{\epsilon_0}{4}( F F^\dagger - F^\dagger F) \end{aligned}
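As a numerical sanity check of this conjugate split (Dirac matrix representation of the STA, arbitrary sample field magnitudes; note that reproducing $U = (\epsilon_0/2)(\mathbf{E}^2 + c^2 \mathbf{B}^2)$ fixes the normalization of the symmetrized products at $\epsilon_0/4$):

```python
import numpy as np

# Dirac matrix representation of the STA (an illustrative model only).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2, I4 = np.eye(2), np.zeros((2, 2)), np.eye(4)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g = [np.block([[Z2, sk], [-sk, Z2]]) for sk in (s1, s2, s3)]
i4 = g0 @ g[0] @ g[1] @ g[2]
sig = [gk @ g0 for gk in g]

eps0 = 8.854187817e-12
E0, cB0 = 2.0, 3.0                        # sample field magnitudes (arbitrary)
F  = E0 * sig[0] + i4 @ (cB0 * sig[1])    # F = E + i c B, E along e1, cB along e2
Fd = E0 * sig[0] - i4 @ (cB0 * sig[1])    # F dagger negates i, leaves vectors alone

U = (eps0 / 4) * (F @ Fd + Fd @ F)        # only the scalar part survives
assert np.allclose(U, (eps0 / 2) * (E0**2 + cB0**2) * I4)

cP = (eps0 / 4) * (F @ Fd - Fd @ F)       # vector part: eps0 c E x B, here along e3
coeff_e3 = (np.trace(sig[2] @ cP) / 4).real
assert np.isclose(coeff_e3, eps0 * E0 * cB0)
```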

### Divergence.

Calculation of the divergence produces the components of the Lorentz force densities

\begin{aligned}\nabla \cdot T(a)&= \frac{\epsilon_0}{2} {\left\langle{{ \nabla (F a F) }}\right\rangle} \\ &= \frac{\epsilon_0}{2} {\left\langle{{ (\nabla F) a F + (F \nabla) F a }}\right\rangle} \\ \end{aligned}

Here the gradient is used implicitly in bidirectional form, where the direction is implied by context. From Maxwell’s equation we have

\begin{aligned}J/\epsilon_0 c&= (\nabla F)^{\tilde{}} \\ &= (\tilde{F} \tilde{\nabla}) \\ &= -(F \nabla) \end{aligned}

and continuing the expansion

\begin{aligned}\nabla \cdot T(a)&= \frac{1}{2c} {\left\langle{{ J a F - J F a }}\right\rangle} \\ &= \frac{1}{2c} {\left\langle{{ F J a - J F a }}\right\rangle} \\ &= \frac{1}{2c} {\left\langle{{ (F J - J F) a }}\right\rangle} \\ \end{aligned}

Wrapping up, the divergence and the adjoint of the energy momentum tensor are

\begin{aligned}\nabla \cdot T(a) &= \frac{1}{c} (F \cdot J) \cdot a \\ \bar{T}(\nabla) &= F \cdot J/c \end{aligned}

When integrated over a volume, the quantities $F \cdot J/c$ are the components of the RHS of the Lorentz force equation $\dot{p} = q F \cdot v/c$.

# References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.