Peeter Joot's (OLD) Blog.

Math, physics, perl, and programming obscurity.


Relativistic classical proton electron interaction. (a start).

Posted by peeterjoot on September 15, 2009

[Click here for a PDF of this sequence of posts with nicer formatting]


The problem of solving for the relativistically correct trajectories of a classically interacting proton and electron is one that I've wanted to try for a while. Conceptually this is just about the simplest interaction problem in electrodynamics (other than the motion of a particle in a field), but it is not obvious to me how to even set up the right equations to solve. I should have the tools now to at least write down the equations to solve, and perhaps solve them too.

Familiarity with Geometric Algebra, and the STA form of the Maxwell and Lorentz force equation will be assumed. Writing F = \mathbf{E} + c I \mathbf{B} for the Faraday bivector, these equations are respectively

\begin{aligned}\nabla F &= J/\epsilon_0 c \\ m\frac{d^2 X}{d\tau^2} &= \frac{q}{c} F \cdot \frac{dX}{d\tau} \end{aligned} \quad\quad\quad(1)

The possibility of self interaction will also be ignored here. From what I have read this self interaction is more complex than regular two particle interaction.

With only Coulomb interaction.

With just Coulomb (non-relativistic) interaction, the setup of the equations of motion for the relative vector difference between the particles is straightforward. Let's write this out as a reference. Whatever we come up with for the relativistic case should reduce to this at small velocities.

Fixing notation, let's write the proton and electron positions respectively as \mathbf{r}_p and \mathbf{r}_e, the proton charge as Z e, and the electron charge as -e. For the forces we have

FIXME: picture

\begin{aligned}\text{Force on electron} &= m_e \frac{d^2 \mathbf{r}_e}{dt^2} = - \frac{1}{{4 \pi \epsilon_0}} Z e^2 \frac{\mathbf{r}_e - \mathbf{r}_p}{{\left\lvert{\mathbf{r}_e - \mathbf{r}_p}\right\rvert}^3} \\ \text{Force on proton} &= m_p \frac{d^2 \mathbf{r}_p}{dt^2} = \frac{1}{{4 \pi \epsilon_0}} Z e^2 \frac{\mathbf{r}_e - \mathbf{r}_p}{{\left\lvert{\mathbf{r}_e - \mathbf{r}_p}\right\rvert}^3} \end{aligned} \quad\quad\quad(3)

Subtracting the two after mass division yields the reduced mass equation for the relative motion

\begin{aligned}\frac{d^2 (\mathbf{r}_e -\mathbf{r}_p)}{dt^2} = - \frac{1}{{4 \pi \epsilon_0}} Z e^2 \left( \frac{1}{{m_e}} + \frac{1}{{m_p}}\right) \frac{\mathbf{r}_e - \mathbf{r}_p}{{\left\lvert{\mathbf{r}_e - \mathbf{r}_p}\right\rvert}^3}  \end{aligned} \quad\quad\quad(5)

This is now of the same form as the classical problem of two particle gravitational interaction, with the well known conic solutions.
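As a quick sanity check, (5) can be integrated numerically. The following sketch uses normalized units, with the constant Z e^2 ( 1/m_e + 1/m_p )/4\pi\epsilon_0 set to one, and arbitrary bound-orbit initial conditions; a velocity Verlet step keeps the conserved energy nearly constant over many steps.

```python
import numpy as np

def accel(r, K=1.0):
    # right hand side of (5): d^2 r/dt^2 = -K r/|r|^3,
    # with K = Z e^2 (1/m_e + 1/m_p)/(4 pi epsilon_0) normalized to 1
    return -K * r / np.linalg.norm(r)**3

def energy(r, v, K=1.0):
    # conserved energy of the relative motion
    return 0.5 * v @ v - K / np.linalg.norm(r)

r = np.array([1.0, 0.0])   # relative coordinate r_e - r_p, arbitrary units
v = np.array([0.0, 0.8])   # energy < 0, so this is a bound (elliptical) orbit
dt = 1e-3
E0 = energy(r, v)
a = accel(r)
for _ in range(20000):     # velocity Verlet: symplectic, very little energy drift
    v += 0.5*dt*a
    r += dt*v
    a = accel(r)
    v += 0.5*dt*a
```

The orbit stays bounded between perihelion and aphelion, consistent with the conic solutions.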

Using the divergence equation instead.

While use of the Coulomb force above provides the equation of motion for the relative motion of the charges, how to generalize this to the relativistic case is not entirely clear. For the relativistic case we need to consider all of Maxwell’s equations, and not just the divergence equation. Let’s back up a step and setup the problem using the divergence equation instead of Coulomb’s law. This is a bit closer to the use of all of Maxwell’s equations.

To start off we need an expression for the charge density of discrete point charges, and can use the delta distribution to express this.

\begin{aligned}0 = \int d^3 x \left( \boldsymbol{\nabla} \cdot \mathbf{E} - \frac{1}{{\epsilon_0}} \left( Z e \delta^3(\mathbf{x} - \mathbf{r}_p) - e \delta^3(\mathbf{x} - \mathbf{r}_e) \right) \right) \end{aligned} \quad\quad\quad(6)

Picking a volume element that encloses only one of the respective charges gives us the Coulomb law for the field produced by that charge, as above

\begin{aligned}0 &= \int_{\text{Volume around proton only}} d^3 x \left( \boldsymbol{\nabla} \cdot \mathbf{E}_p - \frac{1}{{\epsilon_0}} Z e \delta^3(\mathbf{x} - \mathbf{r}_p) \right) \\ 0 &= \int_{\text{Volume around electron only}} d^3 x \left( \boldsymbol{\nabla} \cdot \mathbf{E}_e + \frac{1}{{\epsilon_0}} e \delta^3(\mathbf{x} - \mathbf{r}_e) \right) \end{aligned} \quad\quad\quad(7)

Here \mathbf{E}_p and \mathbf{E}_e denote the electric fields due to the proton and electron respectively. Ignoring the possibility of self interaction, the Lorentz forces on the particles are
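The "volume around one charge only" idea in (7) can be illustrated numerically using the flux form of the divergence equation (via the divergence theorem). With 1/4\pi\epsilon_0 = 1 the flux of \mathbf{E} through a sphere enclosing only the proton comes out to 4\pi Z e, the exterior electron contributing nothing net. The charge placements below are arbitrary test values.

```python
import numpy as np

def E_point(pts, q, pos):
    # Coulomb field of a point charge at pos, with 1/(4 pi epsilon_0) = 1
    d = pts - pos
    r = np.linalg.norm(d, axis=-1, keepdims=True)
    return q * d / r**3

def flux_through_sphere(center, R, charges, n=400):
    # midpoint-rule quadrature of the outward E flux over a sphere of radius R
    theta = (np.arange(n) + 0.5) * np.pi / n
    phi = (np.arange(n) + 0.5) * 2*np.pi / n
    T, P = np.meshgrid(theta, phi, indexing="ij")
    nhat = np.stack([np.sin(T)*np.cos(P), np.sin(T)*np.sin(P), np.cos(T)], axis=-1)
    pts = center + R*nhat
    En = sum(np.sum(E_point(pts, q, pos)*nhat, axis=-1) for q, pos in charges)
    dA = R**2 * np.sin(T) * (np.pi/n) * (2*np.pi/n)
    return float(np.sum(En * dA))

Z = 1.0   # proton charge Z e, with e = 1
charges = [(Z, np.array([0.0, 0.0, 0.0])),     # proton at the origin
           (-1.0, np.array([3.0, 0.0, 0.0]))]  # electron outside the sphere
phi_p = flux_through_sphere(np.zeros(3), 1.0, charges)  # encloses only the proton
```

The computed flux matches 4\pi Z regardless of where the exterior electron sits, which is exactly the split expressed by (7).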

\begin{aligned}\text{Force on proton/electron} = \text{charge of proton/electron times field due to electron/proton} \end{aligned}

In symbols, this is

\begin{aligned}m_p \frac{d^2 \mathbf{r}_p}{dt^2} &= Z e \mathbf{E}_e \\ m_e \frac{d^2 \mathbf{r}_e}{dt^2} &= - e \mathbf{E}_p \end{aligned} \quad\quad\quad(9)

If we were to substitute back into the volume integrals we’d have

\begin{aligned}0 &= \int_{\text{Volume around proton only}} d^3 x \left( -\frac{m_e}{e}\boldsymbol{\nabla} \cdot \frac{d^2 \mathbf{r}_e}{dt^2} - \frac{1}{{\epsilon_0}} Z e \delta^3(\mathbf{x} - \mathbf{r}_p) \right) \\ 0 &= \int_{\text{Volume around electron only}} d^3 x \left( \frac{m_p}{Z e}\boldsymbol{\nabla} \cdot \frac{d^2 \mathbf{r}_p}{dt^2} + \frac{1}{{\epsilon_0}} e \delta^3(\mathbf{x} - \mathbf{r}_e) \right) \end{aligned} \quad\quad\quad(11)

It is tempting to take the difference of these two equations so that we can write this in terms of the relative acceleration d^2 (\mathbf{r}_e - \mathbf{r}_p)/dt^2. I did just this initially, and was surprised by a mass term of the form 1/m_e - 1/m_p instead of the reduced mass, which cannot be right. The key to avoiding this mistake is proper consideration of the integration volumes. Since the volumes are different and can in fact be entirely disjoint, subtracting these is not possible. For this reason we have to be especially careful if a differential form of the divergence integrals (7) were to be used, as in

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E}_p &= \frac{1}{{\epsilon_0}} Z e \delta^3(\mathbf{x} - \mathbf{r}_p) \\ \boldsymbol{\nabla} \cdot \mathbf{E}_e &= -\frac{1}{{\epsilon_0}} e \delta^3(\mathbf{x} - \mathbf{r}_e)  \end{aligned} \quad\quad\quad(13)

The domain of applicability of these equations is no longer explicit, since each has to omit a neighborhood around the other charge. When using a delta distribution to express the point charge density it is probably best to stick with an explicit integral form.

Comparing how far we can get starting with Gauss's law instead of the Coulomb force, and looking forward to the relativistic case, it seems likely that solving the field equations due to the respective current densities will be the first required step. Only then can we substitute that field solution back into the Lorentz force equation to complete the search for the particle trajectories.

Relativistic interaction.

First order of business is an expression for a point charge current density four vector. Following Jackson [1], but switching to vector notation from coordinates, we can apparently employ an arbitrary parametrization for the four-vector particle trajectory R = R^\mu \gamma_\mu, as measured in the observer frame, and write

\begin{aligned}J(X) = q c \int d\lambda \frac{dX}{d\lambda} \delta^4 (X - R(\lambda)) \end{aligned} \quad\quad\quad(15)

Here X = X^\mu \gamma_\mu is the four vector event specifying the spacetime position of the current, also as measured in the observer frame. Reparametrizing in terms of time should get us back something more familiar looking

\begin{aligned}J(X) &= q c \int dt \frac{dX}{dt} \delta^4 (X - R(t)) \\ &= q c \int dt \frac{d}{dt} (c t \gamma_0 + \gamma_k X^k)\delta^4 (X - R(t)) \\ &= q c \int dt \frac{d}{dt} (c t + \mathbf{x})\delta^4 (X - R(t)) \gamma_0 \\ &= q c \int dt (c + \mathbf{v})\delta^4 (X - R(t)) \gamma_0 \\ &= q c \int dt' (c + \mathbf{v}(t'))\delta^3 (\mathbf{x} - \mathbf{r}(t')) \delta(c t' - c t) \gamma_0 \\  \end{aligned}

Note that the scaling property of the delta function implies \delta(c t) = \delta(t)/c. With the split of the four-volume delta function \delta^4(X - R(t)) = \delta^3(\mathbf{x} - \mathbf{r}(t)) \delta( {x^0}' - x^0 ), where x^0 = c t, we have an explanation for why Jackson had a factor of c in his representation. I initially thought this factor of c was due to CGS vs SI units! One more Jackson equation decoded. We are left with the following spacetime split for a point charge current density four vector

\begin{aligned}J(X) &= q (c + \mathbf{v}(t))\delta^3 (\mathbf{x} - \mathbf{r}(t)) \gamma_0  \end{aligned} \quad\quad\quad(16)

Comparing to the continuous case where we have J = \rho ( c + \mathbf{v} ) \gamma_0, it appears that this works out right. One thing worth noting is that in this time reparameterization I accidentally mixed up X, the observation event coordinates of J(X), and R, the spacetime trajectory of the particle itself. Despite this, I am saved by the delta function since no contributions to the current can occur on trajectories other than R, the worldline of the particle itself. So in the final result it should be correct to interpret \mathbf{v} as the spatial particle velocity as I did accidentally.
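The scaling property \delta(c t) = \delta(t)/c used above is easy to verify numerically with a Gaussian nascent delta function:

```python
import numpy as np

c, sigma = 3.0, 0.01
t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]

def delta_sigma(u):
    # Gaussian nascent delta: unit area, width sigma
    return np.exp(-(u/sigma)**2) / (sigma*np.sqrt(np.pi))

# integrating delta(c t) over t should give 1/c by the scaling property
val = dt * delta_sigma(c*t).sum()
```

As sigma shrinks this converges to exactly 1/c, which is where Jackson's factor of c comes from.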

With the time reparameterization of the current density, we have for the field due to our proton and electron

\begin{aligned}0 &= \int d^3 x \left( \epsilon_0 c \nabla F - \left( Z e (c + \mathbf{v}_p(t))\delta^3 (\mathbf{x} - \mathbf{r}_p(t)) - e (c + \mathbf{v}_e(t))\delta^3 (\mathbf{x} - \mathbf{r}_e(t)) \right) \gamma_0 \right) \end{aligned} \quad\quad\quad(17)

How to write this in a more tidy covariant form? If we reparametrize with any of the other spatial coordinates, say x, we end up having to integrate the field gradient with a spacetime three form (dt dy dz if parametrizing the current density with x). Since the entire equation must be zero I suppose we can just integrate that once more, and simply write

\begin{aligned}\text{constant} &= \int d^4 x \left( \nabla F - \frac{e}{\epsilon_0 c}\int d\tau \frac{dX}{d\tau} \left( Z \delta^4 (X - R_p(\tau)) - \delta^4 (X - R_e(\tau)) \right) \right) \end{aligned} \quad\quad\quad(18)

Like (7) we can pick spacetime volumes that surround just the individual particle worldlines, in which case we have a Coulomb’s law like split where the field depends on just the enclosed current. That is

\begin{aligned}\text{constant} &= \int_{\text{spacetime volume around only the proton}} d^4 x \left( \nabla F_p - \frac{Z e}{\epsilon_0 c} \int d\tau \frac{dX}{d\tau} \delta^4 (X - R_p(\tau)) \right) \\ \text{constant} &= \int_{\text{spacetime volume around only the electron}} d^4 x \left( \nabla F_e + \frac{e}{\epsilon_0 c} \int d\tau \frac{dX}{d\tau} \delta^4 (X - R_e(\tau)) \right) \end{aligned} \quad\quad\quad(19)

Here F_e is the field due to only the electron charge, whereas F_p would be that part of the total field due to the proton charge.

FIXME: attempt to draw a picture (one or two spatial dimensions) to develop some comfort with tossing out a phrase like “spacetime volume surrounding a particle worldline”.

Having expressed the equation for the total field (18), we are tracking a nice parallel to the setup for the non-relativistic treatment. Next is the pair of Lorentz force equations. As in the non-relativistic setup, if we only consider the field due to the other charge, we have, in covariant Geometric Algebra form, the following pair of proper force equations in terms of the particle worldline trajectories

\begin{aligned}\text{proper Force on electron} &= m_e \frac{d^2 R_e}{d\tau^2} = - e F_p \cdot \frac{d R_e}{c d\tau} \\ \text{proper Force on proton} &= m_p \frac{d^2 R_p}{d\tau^2} = Z e F_e \cdot \frac{d R_p}{c d\tau} \end{aligned} \quad\quad\quad(21)

We have four sets of coupled multivector equations to be solved, so the question remains how to do so. Each of the two Lorentz force equations supplies four equations with four unknowns, and the field equations are really two sets of eight equations with six unknown field variables each. Then they are all tied up together in a big coupled mess. Wow. How do we solve this?

With (19) and (21) committed to pdf, at least the first goal of writing down the equations is done.

As for the actual solution. Well, that’s a problem for another night. TO BE CONTINUED (if I can figure out an attack).
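While the fully coupled system is out of reach for now, the Lorentz force half of the problem can at least be exercised on the simpler sub-problem mentioned at the start: one relativistic charge in a fixed Coulomb field. This sketch sets c = 1, folds all charge and mass constants into a single constant k, and picks arbitrary initial conditions; it integrates the proper-time equations of motion and checks that the invariant u \cdot u = 1 is preserved, which follows from the antisymmetry of F.

```python
import numpy as np

def deriv(s, k=1.0):
    # s = (x, y, ux, uy, u0): position, spatial proper velocity, and u^0 = dt/dtau.
    # du/dtau = (q/m) F . u for a purely electric (Coulomb) field, c = 1, with
    # k standing in for Z e^2/(4 pi epsilon_0 m); signs make the force attractive.
    x, y, ux, uy, u0 = s
    r3 = (x*x + y*y)**1.5
    ax, ay = -k*x/r3, -k*y/r3
    return np.array([ux, uy, ax*u0, ay*u0, ax*ux + ay*uy])

def rk4_step(s, h):
    k1 = deriv(s)
    k2 = deriv(s + 0.5*h*k1)
    k3 = deriv(s + 0.5*h*k2)
    k4 = deriv(s + h*k3)
    return s + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# bound orbit: the angular momentum r x u = 1.5 is large enough (relative to k = 1)
# to avoid the relativistic capture spiral of the attractive Coulomb problem
s = np.array([1.0, 0.0, 0.0, 1.5, np.sqrt(1.0 + 1.5**2)])  # u.u = u0^2 - |u|^2 = 1
h = 1e-3
for _ in range(10000):
    s = rk4_step(s, h)
u_dot_u = s[4]**2 - s[2]**2 - s[3]**2   # should remain 1 (= c^2)
r_final = float(np.hypot(s[0], s[1]))
```

Since u_\mu du^\mu/d\tau = (q/m) F^{\mu\nu} u_\mu u_\nu = 0 exactly, any drift of u \cdot u away from one measures only the integrator error.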


[1] JD Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.

[2] C. Doran and A.N. Lasenby. Geometric Algebra for Physicists. Cambridge University Press, Cambridge, UK, 1st edition, 2003.


(INCOMPLETE) Geometry of Maxwell radiation solutions

Posted by peeterjoot on August 18, 2009

[Click here for a PDF of this post with nicer formatting]


We have in GA multiple possible ways to parametrize an oscillatory time dependence for a radiation field.

This was going to be an attempt to systematically solve the resulting eigen-multivector problem, starting with the I\hat{\mathbf{z}} \omega t exponential time parametrization, but I got stuck part way. Perhaps using a plain old I \omega t would work out better, but I’ve spent more time on this than I want for now.

Setup. The eigenvalue problem.

Again following Jackson ([1]), we use CGS units. Maxwell’s equation in these units, with F = \mathbf{E} + I\mathbf{B}/\sqrt{\mu\epsilon} is

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \partial_0) F  \end{aligned} \quad\quad\quad(1)

With an assumed oscillatory time dependence

\begin{aligned}F = \mathcal{F} e^{i\omega t} \end{aligned} \quad\quad\quad(2)

Maxwell’s equation reduces to a multivariable eigenvalue problem

\begin{aligned}\boldsymbol{\nabla} \mathcal{F} &= - \mathcal{F} i \lambda \\ \lambda &= \sqrt{\mu\epsilon} \frac{\omega}{c}  \end{aligned} \quad\quad\quad(3)

We have some flexibility in picking the imaginary. As well as a non-geometric imaginary i typically used for a phasor representation where we take real parts of the field, we have additional possibilities, two of which are

\begin{aligned}i &= \hat{\mathbf{x}}\hat{\mathbf{y}}\hat{\mathbf{z}} = I \\ i &= \hat{\mathbf{x}} \hat{\mathbf{y}} = I \hat{\mathbf{z}} \end{aligned} \quad\quad\quad(5)

The first is the spatial pseudoscalar, which commutes with all vectors and bivectors. The second is the unit bivector for the transverse plane, here parametrized by duality using the perpendicular to the plane direction \hat{\mathbf{z}}.
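These commutation claims can be verified directly in the Pauli matrix representation of the 3D geometric algebra, where \hat{\mathbf{x}}, \hat{\mathbf{y}}, \hat{\mathbf{z}} map to \sigma_1, \sigma_2, \sigma_3:

```python
import numpy as np

# Pauli matrices standing in for the unit vectors xhat, yhat, zhat of the 3D GA
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

I3 = sx @ sy @ sz   # spatial pseudoscalar xhat yhat zhat (equals i times the identity)
Iz = sx @ sy        # transverse plane bivector I zhat = xhat yhat

# the pseudoscalar commutes with every vector
for v in (sx, sy, sz):
    assert np.allclose(I3 @ v, v @ I3)

# I zhat anticommutes with the transverse vectors and commutes with zhat
assert np.allclose(Iz @ sx, -sx @ Iz)
assert np.allclose(Iz @ sy, -sy @ Iz)
assert np.allclose(Iz @ sz, sz @ Iz)
```

That I\hat{\mathbf{z}} anticommutes with transverse vectors is what makes moving the exponential through \mathcal{F} possible below.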

Let’s examine the geometry required of the object \mathcal{F} for each of these two geometric modeling choices.

Using the transverse plane bivector for the imaginary.

Assuming no prior assumptions about \mathcal{F} let’s allow for the possibility of scalar, vector, bivector and pseudoscalar components

\begin{aligned}F = e^{-I\hat{\mathbf{z}} \omega t} ( F_0 + F_1 + F_2 + F_3 ) \end{aligned} \quad\quad\quad(7)

Writing e^{-I\hat{\mathbf{z}} \omega t} = \cos(\omega t) -I \hat{\mathbf{z}} \sin(\omega t) = C_\omega -I \hat{\mathbf{z}} S_\omega, an expansion of this product separated into grades is

\begin{aligned}F &=   C_\omega F_0 - I S_\omega (\hat{\mathbf{z}} \wedge F_2) \\ &+ C_\omega F_1 - \hat{\mathbf{z}} S_\omega (I F_3) + S_\omega (\hat{\mathbf{z}} \times F_1)  \\ &+ C_\omega F_2 - I \hat{\mathbf{z}} S_\omega F_0 - I S_\omega (\hat{\mathbf{z}} \cdot F_2) \\ &+ C_\omega F_3 - I S_\omega (\hat{\mathbf{z}} \cdot F_1) \end{aligned}

By construction F has only vector and bivector grades, so a requirement for zero scalar and pseudoscalar for all t means that we have four immediate constraints (with \mathbf{n} \perp \hat{\mathbf{z}}.)

\begin{aligned}F_0 &= 0 & \\ F_3 &= 0 & \\ F_2 &= \hat{\mathbf{z}} \wedge \mathbf{m} \\ F_1 &= \mathbf{n}  \end{aligned}

Since we have the flexibility to add or subtract any scalar multiple of \hat{\mathbf{z}} to \mathbf{m} we can write F_2 = \hat{\mathbf{z}} \mathbf{m} where \mathbf{m} \perp \hat{\mathbf{z}}. Our field can now be written as just

\begin{aligned}F &=  C_\omega \mathbf{n} - I S_\omega (\hat{\mathbf{z}} \wedge \mathbf{n})  \\ &+ C_\omega \hat{\mathbf{z}} \mathbf{m} - I S_\omega (\hat{\mathbf{z}} \cdot (\hat{\mathbf{z}} \mathbf{m})) \end{aligned}

We can similarly require \mathbf{n} \perp \hat{\mathbf{z}}, leaving

\begin{aligned}F &= (C_\omega - I \hat{\mathbf{z}} S_\omega ) \mathbf{n}  + (C_\omega - I \hat{\mathbf{z}} S_\omega) \mathbf{m} \hat{\mathbf{z}} \end{aligned} \quad\quad\quad(8)

So, just the geometrical constraints give us

\begin{aligned}F &= e^{-I\hat{\mathbf{z}} \omega t}(\mathbf{n} + \mathbf{m} \hat{\mathbf{z}}) \end{aligned} \quad\quad\quad(9)

The first thing to be noted is that this phasor representation, utilizing the transverse plane bivector I\hat{\mathbf{z}} for the imaginary, cannot be the most general. This representation allows for only transverse fields! This can be seen two ways. Computing the transverse and propagation field components we have

\begin{aligned}F_z &= \frac{1}{{2}}(F + \hat{\mathbf{z}} F \hat{\mathbf{z}}) \\ &= \frac{1}{{2}} e^{-I\hat{\mathbf{z}} \omega t}( \mathbf{n} + \mathbf{m} \hat{\mathbf{z}} + \hat{\mathbf{z}} \mathbf{n} \hat{\mathbf{z}} + \hat{\mathbf{z}} \mathbf{m} \hat{\mathbf{z}} \hat{\mathbf{z}}) \\ &= \frac{1}{{2}} e^{-I\hat{\mathbf{z}} \omega t}( \mathbf{n} + \mathbf{m} \hat{\mathbf{z}} - \mathbf{n} - \mathbf{m} \hat{\mathbf{z}} ) \\ &= 0 \end{aligned}

The computation for the transverse field F_t = (F - \hat{\mathbf{z}} F \hat{\mathbf{z}})/2 shows that F = F_t as expected since the propagation component is zero.
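The F_z = 0 computation can also be checked numerically in the Pauli matrix representation, with arbitrary transverse \mathbf{n} and \mathbf{m} and an arbitrary value of \omega t:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Iz = sx @ sy                                   # I zhat

rng = np.random.default_rng(1)
n1, n2 = rng.normal(size=2)
m1, m2 = rng.normal(size=2)
nvec = n1*sx + n2*sy                           # transverse vector n
mvec = m1*sx + m2*sy                           # transverse vector m
wt = 0.7                                       # omega t, arbitrary

R = np.cos(wt)*np.eye(2) - np.sin(wt)*Iz       # e^{-I zhat omega t}
F = R @ (nvec + mvec @ sz)                     # equation (9)

Fz = 0.5*(F + sz @ F @ sz)                     # propagation-direction component
Ft = 0.5*(F - sz @ F @ sz)                     # transverse component
```

Fz vanishes and Ft reproduces F, confirming the purely transverse character of (9).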

Another way to observe this is from the split of F into electric and magnetic field components. From (8) we have

\begin{aligned}\mathbf{E} &= \cos(\omega t) \mathbf{m} + \sin(\omega t) (\hat{\mathbf{z}} \times \mathbf{m}) \\ \mathbf{B} &= \cos(\omega t) (\hat{\mathbf{z}} \times \mathbf{n}) - \sin(\omega t) \mathbf{n} \end{aligned} \quad\quad\quad(10)

The space containing each of the \mathbf{E} and \mathbf{B} vectors lies in the span of the transverse plane. We also see that there’s some potential redundancy in the representation visible here since we have four vectors describing this span \mathbf{m}, \mathbf{n}, \hat{\mathbf{z}} \times \mathbf{m}, and \hat{\mathbf{z}} \times \mathbf{n}, instead of just two.

General wave packet.

If (1) were a scalar equation for F(\mathbf{x},t), it can readily be shown using Fourier transforms that the time propagation of the field, given its initial description, is

\begin{aligned}F(\mathbf{x}, t) = \int \left( \frac{1}{{(2\pi)^3}} \int F(\mathbf{x}', 0) e^{i\mathbf{k} \cdot (\mathbf{x}' -\mathbf{x})} d^3 x \right) e^{i c \mathbf{k} t/ \sqrt{\mu\epsilon}} d^3 k \end{aligned} \quad\quad\quad(12)

In traditional complex algebra the vector exponentials would not be well formed. We do not have that problem in the GA formalism, but this does lead to a contradiction, since the resulting F(\mathbf{x},t) cannot be scalar valued. However, by using this as a motivational tool, and also using the assumed structure of the discrete frequency infinite wavetrain phasor, we can guess that a transverse only (to the z-axis) wave packet may be described by a single direction variant of the Fourier result above. That is

\begin{aligned}F(\mathbf{x}, t) = \frac{1}{{\sqrt{2\pi}}} \int e^{-I \hat{\mathbf{z}} \omega t} \mathcal{F}(\mathbf{x}, \omega)d\omega \end{aligned} \quad\quad\quad(13)

Since (13) has the same form as the earlier single frequency phasor test solution, we now know that \mathcal{F} is required to anticommute with \hat{\mathbf{z}}. Application of Maxwell’s equation to this test solution gives us

\begin{aligned}(\boldsymbol{\nabla} +\sqrt{\mu\epsilon} \partial_0) F(\mathbf{x},t) &=(\boldsymbol{\nabla} +\sqrt{\mu\epsilon} \partial_0) \frac{1}{{\sqrt{2\pi}}} \int \mathcal{F}(\mathbf{x}, \omega)e^{I \hat{\mathbf{z}} \omega t} d\omega \\ &=\frac{1}{{\sqrt{2\pi}}}\int\left(\boldsymbol{\nabla} \mathcal{F} + \mathcal{F} I \hat{\mathbf{z}} \sqrt{\mu\epsilon} \frac{\omega}{c}\right) e^{I \hat{\mathbf{z}} \omega t} d\omega \end{aligned}

This means that \mathcal{F} must satisfy the gradient eigenvalue equation for all \omega

\begin{aligned}\boldsymbol{\nabla} \mathcal{F} = -\mathcal{F} I \hat{\mathbf{z}} \sqrt{\mu\epsilon} \frac{\omega}{c}  \end{aligned} \quad\quad\quad(14)

Observe that this is the single frequency problem of equation (3), so for mono-directional light we can consider the infinite wave train instead of a wave packet with no loss of generality.
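In one dimension the scalar Fourier propagation of (12) is just phase multiplication in the frequency domain, and with a phase linear in k it amounts to a pure translation of the initial profile. A sketch (sign conventions here follow numpy's FFT, which may differ from the convention used in (12); the shift is chosen as an integer number of grid cells so the result can be compared exactly):

```python
import numpy as np

N, width = 1024, 40.0
x = np.linspace(-width/2, width/2, N, endpoint=False)
dx = x[1] - x[0]
f0 = np.exp(-x**2)                       # initial profile F(x, 0)

k = 2*np.pi*np.fft.fftfreq(N, d=dx)      # wavenumbers in numpy's FFT layout
m = 50                                   # propagate by an integer number of cells
a = m*dx                                 # propagation distance c t / sqrt(mu eps)
f_t = np.fft.ifft(np.fft.fft(f0)*np.exp(-1j*k*a)).real
```

The propagated field equals the initial profile rigidly shifted by a, i.e. F(x, t) = F(x - ct/\sqrt{\mu\epsilon}, 0) for this dispersionless case.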

Applying separation of variables.

While this may not lead to the most general solution to the radiation problem, the transverse only propagation problem is still one of interest. Let’s see where this leads. In order to reduce the scope of the problem by one degree of freedom, let’s split out the \hat{\mathbf{z}} component of the gradient, writing

\begin{aligned}\boldsymbol{\nabla} = \boldsymbol{\nabla}_t + \hat{\mathbf{z}} \partial_z \end{aligned} \quad\quad\quad(15)

Also introduce a product split for separation of variables for the z dependence. That is

\begin{aligned}\mathcal{F} = G(x,y) Z(z) \end{aligned} \quad\quad\quad(16)

Again we are faced with the problem of too many choices for the grades of each of these factors. We can pick one of these, say Z, to have only scalar and pseudoscalar grades so that the two factors commute. Then we have

\begin{aligned}(\boldsymbol{\nabla}_t + \boldsymbol{\nabla}_z) \mathcal{F} = (\boldsymbol{\nabla}_t G) Z + \hat{\mathbf{z}} G \partial_z Z = -G Z I \hat{\mathbf{z}} \lambda  \end{aligned}

With Z in an algebra isomorphic to the complex numbers, it is necessarily invertible (and commutes with its derivative). Similar arguments to the grade fixing for \mathcal{F} show that G has only vector and bivector grades, but does G have the inverse required to do the separation of variables? Let’s blindly suppose that we can do this (and if we can’t we can probably fudge it since we multiply again soon after). With some rearranging we have

\begin{aligned}-\frac{1}{{G}} \hat{\mathbf{z}} (\boldsymbol{\nabla}_t G + G I \hat{\mathbf{z}} \lambda) = (\partial_z Z)\frac{1}{{Z}} = \text{constant} \end{aligned} \quad\quad\quad(17)

We want to separately equate these to a constant. In order to commute these factors we’ve only required that Z have only scalar and pseudoscalar grades, so for the constant let’s pick an arbitrary element in this subspace. That is

\begin{aligned}(\partial_z Z)\frac{1}{{Z}} = \alpha + k I \end{aligned} \quad\quad\quad(18)

The solution for the Z factor in the separation of variables is thus

\begin{aligned}Z \propto e^{(\alpha + k I)z} \end{aligned} \quad\quad\quad(19)
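Since Z lives in the scalar plus pseudoscalar subalgebra, which is isomorphic to the complex numbers (I \leftrightarrow i), the pair (18), (19) can be checked with ordinary complex arithmetic:

```python
import numpy as np

alpha, kk = 0.3, 2.0
w = alpha + 1j*kk           # alpha + k I, mapping the pseudoscalar I to 1j

def Z(z):
    # equation (19): Z proportional to e^{(alpha + k I) z}
    return np.exp(w*z)

# central-difference check of (18): (dZ/dz)/Z = alpha + k I
z0, h = 0.5, 1e-6
ratio = (Z(z0 + h) - Z(z0 - h)) / (2*h) / Z(z0)
```

The derivative ratio reproduces \alpha + k I independent of z, as required for the separation constant.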

For G the separation of variables gives us

\begin{aligned}\boldsymbol{\nabla}_t G + (G \hat{\mathbf{z}} \lambda + \hat{\mathbf{z}} G k) I + \hat{\mathbf{z}} G \alpha = 0 \end{aligned} \quad\quad\quad(20)

We’ve now reduced the problem to something like a two variable eigenvalue problem, where the differential operator to find eigenvectors for is the transverse gradient \boldsymbol{\nabla}_t. We unfortunately have an untidy split of the eigenvalue into left and right hand factors.

While the product GZ was transverse only, we’ve now potentially lost that nice property for G itself, and do not know if G is strictly commuting or anticommuting with \hat{\mathbf{z}}. Assuming either possibility for now, we can split this multivector into transverse and propagation direction fields G = G_t + G_z

\begin{aligned}G_t &= \frac{1}{{2}}(G - \hat{\mathbf{z}} G \hat{\mathbf{z}}) \\ G_z &= \frac{1}{{2}}(G + \hat{\mathbf{z}} G \hat{\mathbf{z}}) \end{aligned} \quad\quad\quad(21)

With this split, noting that \hat{\mathbf{z}} G_t = -G_t \hat{\mathbf{z}}, and \hat{\mathbf{z}} G_z = G_z \hat{\mathbf{z}} a rearrangement of (20) produces

\begin{aligned}(\nabla_t + \hat{\mathbf{z}} ((k-\lambda) I + \alpha)) G_t = -(\nabla_t + \hat{\mathbf{z}} ((k+\lambda) I + \alpha)) G_z \end{aligned} \quad\quad\quad(23)

How do we find the eigen multivectors G_t and G_z? A couple of possibilities come to mind (perhaps not encompassing all solutions). One is for one of G_t or G_z to be zero, and for the other to separately require both halves of (23) to equal a constant, very much like separation of variables, despite the fact that both of these functions G_t and G_z are functions of x and y. The easiest non-trivial path is probably letting both sides of (23) separately equal zero, so that we are left with two independent eigen-multivector problems to solve

\begin{aligned}\nabla_t G_t &= -\hat{\mathbf{z}} ((k-\lambda) I + \alpha) G_t \\ \nabla_t G_z &= -\hat{\mathbf{z}} ((k+\lambda) I + \alpha) G_z \end{aligned} \quad\quad\quad(24)

Damn. Have to mull this over. Don’t know where to go with it.


[1] JD Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.


Covariant Maxwell equation in media (CGS units)

Posted by peeterjoot on August 10, 2009

[Click here for a PDF of this post with nicer formatting]

Motivation, some notation, and review.

Adjusting to Jackson’s use of CGS units ([1]) and to Maxwell’s equations in matter takes some work. A first pass at a GA form was assembled in ([2]), based on what was in the introduction chapter for media that includes \mathbf{P} and \mathbf{M} properties. He later changes conventions, and also assumes linear media in most cases, so we want something different than what was previously derived.

The non-covariant form of Maxwell’s equation in absence of current and charge has been convenient to use in some initial attempts to look at wave propagation. That was

\begin{aligned}F &= \mathbf{E} + I\mathbf{B}/\sqrt{\mu\epsilon} \\ 0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon} \partial_0) F \end{aligned} \quad\quad\quad(1)

To examine the energy momentum tensor, it is desirable to express this in a fashion that has no such explicit spacetime dependence. This suggests a spacetime gradient definition that varies throughout the media.

\begin{aligned}\nabla \equiv \gamma^m \partial_m + \sqrt{\mu\epsilon} \gamma^0 \partial_0 \end{aligned} \quad\quad\quad(3)

Observe that this spacetime gradient is adjusted by the speed of light in the media, and isn’t one that is naturally relativistic. Even though the differential form of Maxwell’s equation is implicitly defined only in a neighborhood of the point it is evaluated at, we now have a reason to say this explicitly, because this non-isotropic condition is now hiding in the (perhaps poor) notation for the operator. Ignoring the obscuring nature of this operator, and working with it, we can say that Maxwell’s equation in the neighborhood (where \mu\epsilon is “fixed”) is

\begin{aligned}\nabla F = 0 \end{aligned} \quad\quad\quad(4)

We also want a variant of this that includes the charge and current terms.

Linear media.

Lets pick Jackson’s equation (6.70) as the starting point. A partial translation to GA form, with \mathbf{D} = \epsilon \mathbf{E}, and \mathbf{B} = \mu \mathbf{H}, and \partial_0 = \partial/\partial ct is

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \cdot \epsilon \mathbf{E} &= 4 \pi \rho \\ -I \boldsymbol{\nabla} \wedge \mathbf{E} + \partial_0 \mathbf{B} &= 0 \\ -I \boldsymbol{\nabla} \wedge \mathbf{B}/\mu - \partial_0 \epsilon \mathbf{E} &= \frac{4 \pi}{c} \mathbf{J} \end{aligned} \quad\quad\quad(5)

Scaling and adding we have

\begin{aligned}\boldsymbol{\nabla} \mathbf{E} + \partial_0 I \mathbf{B} &= \frac{4 \pi \rho}{\epsilon} \\ \boldsymbol{\nabla} \mathbf{B} - I \partial_0 \mu \epsilon \mathbf{E} &= \frac{4 \pi \mu I}{c} \mathbf{J} \end{aligned} \quad\quad\quad(9)

One last scaling prepares for the addition of these last two equations

\begin{aligned}\boldsymbol{\nabla} \mathbf{E} + \sqrt{\mu\epsilon}\partial_0 I \mathbf{B}/\sqrt{\mu\epsilon} &= \frac{4 \pi \rho}{\epsilon} \\ \boldsymbol{\nabla} I \mathbf{B}/\sqrt{\mu\epsilon} + \partial_0 \sqrt{\mu \epsilon} \mathbf{E} &= -\frac{4 \pi \mu }{c\sqrt{\mu\epsilon}}\mathbf{J} \end{aligned} \quad\quad\quad(11)

This gives us a non-covariant assembly of Maxwell’s equations in linear media

\begin{aligned}(\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F &= \frac{4 \pi}{c} \left( \frac{c \rho}{\epsilon} - \sqrt{\frac{\mu}{\epsilon}} \mathbf{J} \right) \end{aligned} \quad\quad\quad(13)

Premultiplication by \gamma_0, and utilizing the definition of (3) we have

\begin{aligned}\nabla F &= \frac{4 \pi}{c} \left( c \frac{\rho}{\epsilon} \gamma_0 + \sqrt{\frac{\mu}{\epsilon}} J^m \gamma_m \right) \end{aligned} \quad\quad\quad(14)

We can then define

\begin{aligned}J \equiv \frac{c \rho}{\epsilon} \gamma_0 + \sqrt{\frac{\mu}{\epsilon}} J^m \gamma_m \end{aligned} \quad\quad\quad(15)

and are left with an expression of Maxwell’s equation that puts space and time on a similar footing. It’s probably not really right to call this a covariant expression since it isn’t naturally relativistic.

\begin{aligned}\nabla F &= \frac{4 \pi}{c} J \end{aligned} \quad\quad\quad(16)
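The premultiplication step taking (13) to (14) can be spot-checked in a matrix representation of the STA (Dirac matrices, signature (+,-,-,-)), using arbitrary test values for \rho, \mathbf{J}, \mu, and \epsilon:

```python
import numpy as np

# Dirac representation of the STA gamma matrices, signature (+,-,-,-)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g = [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]

rng = np.random.default_rng(2)
Jv = rng.normal(size=3)                        # components of the relative vector J
c, rho, eps, mu = 1.0, 0.3, 2.0, 1.5           # arbitrary test values

Jrel = sum(Jv[k]*g[k] @ g0 for k in range(3))  # bold J = J^m gamma_m gamma_0
lhs = g0 @ (c*rho/eps*np.eye(4) - np.sqrt(mu/eps)*Jrel)
rhs = c*rho/eps*g0 + np.sqrt(mu/eps)*sum(Jv[k]*g[k] for k in range(3))
```

The two sides agree, confirming the sign flip \gamma_0 \gamma_m \gamma_0 = -\gamma_m that turns the relative current vector into the four vector components of (14).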

Energy momentum tensor.

My main goal was to find the GA form of the stress energy tensor in media. With the requirement for both an alternate spacetime gradient and the inclusion of the scaling factors for the media, it is not obviously clear to me how to translate from the vacuum expression in SI units to the CGS in media form. It makes sense to step back to see how the divergence conservation equation translates with both of these changes. In SI units our tensor (a four vector parametrized by another direction vector a) was

\begin{aligned}T(a) \equiv \frac{-1}{2\epsilon_0} F a F \end{aligned} \quad\quad\quad(17)

Ignoring units temporarily, let’s calculate the media-spacetime divergence of -FaF/2. That is

\begin{aligned}-\frac{1}{{2}} \nabla \cdot (FaF)&=-\frac{1}{{2}} \left\langle{{\nabla (FaF)}}\right\rangle \\ &=-\frac{1}{{2}} \left\langle{{(F(\stackrel{ \rightarrow }\nabla F) + (F\stackrel{ \leftarrow }\nabla)F) a}}\right\rangle \\ &=-\frac{4\pi}{c} \left\langle{{\frac{1}{{2}}(F J - J F) a}}\right\rangle \\ &=-\frac{4\pi}{c} (F \cdot J) \cdot a \\  \end{aligned}

We want the T^{\mu 0} components of the tensor T(\gamma_0). Noting the anticommutation relation for the pseudoscalar I \gamma_0 = -\gamma_0 I, and the anticommutation behavior for spatial vectors such as \mathbf{E} \gamma_0 = -\gamma_0 \mathbf{E}, we have

\begin{aligned}-\frac{1}{{2}} (\mathbf{E} + I\mathbf{B}/\sqrt{\mu\epsilon}) \gamma_0 (\mathbf{E} + I\mathbf{B}/\sqrt{\mu\epsilon})&=\frac{\gamma_0}{2} (\mathbf{E} - I\mathbf{B}/\sqrt{\mu\epsilon}) (\mathbf{E} + I\mathbf{B}/\sqrt{\mu\epsilon}) \\ &=\frac{\gamma_0}{2} \left( (\mathbf{E}^2 + \mathbf{B}^2/{\mu\epsilon}) + I \frac{1}{{\sqrt{\mu\epsilon}}} (\mathbf{E}\mathbf{B} - \mathbf{B}\mathbf{E}) \right) \\ &=\frac{\gamma_0}{2} (\mathbf{E}^2 + \mathbf{B}^2/{\mu\epsilon}) + \gamma_0 I \frac{1}{{\sqrt{\mu\epsilon}}} (\mathbf{E} \wedge \mathbf{B}) \\ &=\frac{\gamma_0}{2} (\mathbf{E}^2 + \mathbf{B}^2/{\mu\epsilon}) - \gamma_0 \frac{1}{{\sqrt{\mu\epsilon}}} (\mathbf{E} \times \mathbf{B}) \\ &=\frac{\gamma_0}{2} (\mathbf{E}^2 + \mathbf{B}^2/{\mu\epsilon}) - \gamma_0 \frac{1}{{\sqrt{\mu\epsilon}}} \gamma_m \gamma_0 (\mathbf{E} \times \mathbf{B})^m \\ &=\frac{\gamma_0}{2} (\mathbf{E}^2 + \mathbf{B}^2/{\mu\epsilon}) + \frac{1}{{\sqrt{\mu\epsilon}}} \gamma_m (\mathbf{E} \times \mathbf{B})^m \\  \end{aligned}

Calculating the divergence of this using the media spacetime gradient we have

\begin{aligned}\nabla \cdot \left( -\frac{1}{{2}} F \gamma_0 F \right)&=\frac{\sqrt{\mu\epsilon}}{c} \frac{\partial}{\partial t} \frac{1}{2} \left(\mathbf{E}^2 + \frac{1}{{\mu\epsilon}}\mathbf{B}^2\right)+ \sum_m\frac{\partial}{\partial x^m} \left( \frac{1}{{\sqrt{\mu\epsilon}}} (\mathbf{E} \times \mathbf{B})^m \right) \\ &=\frac{\sqrt{\mu\epsilon}}{c} \frac{\partial}{\partial t} \frac{1}{2} \left(\mathbf{E}^2 + \frac{1}{{\mu\epsilon}}\mathbf{B}^2 \right)+ \boldsymbol{\nabla} \cdot \left( \frac{1}{{\sqrt{\mu\epsilon}}} (\mathbf{E} \times \mathbf{B}) \right) \end{aligned}

Multiplying this by (c/4\pi) \sqrt{\epsilon/\mu}, we have

\begin{aligned}\nabla \cdot \left( -\frac{c}{8 \pi} \sqrt{\frac{\epsilon}{\mu}} F \gamma_0 F \right)&=\frac{\partial}{\partial t} \frac{1}{2} \left(\mathbf{E} \cdot \mathbf{D} + \mathbf{B} \cdot \mathbf{H} \right) + \boldsymbol{\nabla} \cdot \frac{c}{4\pi}(\mathbf{E} \times \mathbf{H}) \\ &=- \sqrt{\frac{\epsilon}{\mu}}(F \cdot J) \cdot \gamma_0 \\  \end{aligned}

Now expand the RHS. We have

\begin{aligned}\sqrt{\frac{\epsilon}{\mu}}(F \cdot J) \cdot \gamma_0&=\left((\mathbf{E} + I \mathbf{B}/\sqrt{\mu\epsilon}) \cdot \left( \frac{\rho}{\sqrt{\mu\epsilon}} \gamma_0 + J^m \gamma_m \right) \right) \cdot \gamma_0  \\ &=\left\langle{{E^q \gamma_q \gamma_0 J^m \gamma_m \gamma_0}}\right\rangle \\ &=\mathbf{E} \cdot \mathbf{J} \end{aligned}

Assembling these results, the energy conservation relation, first in covariant form, is

\begin{aligned}\nabla \cdot \left( -\frac{c}{8 \pi} \sqrt{\frac{\epsilon}{\mu}} F a F \right) &= - \sqrt{\frac{\epsilon}{\mu}}(F \cdot J) \cdot a \end{aligned} \quad\quad\quad(18)

and the same with an explicit spacetime split in vector quantities is

\begin{aligned}\frac{\partial}{\partial t} \frac{1}{{8\pi}} \left(\mathbf{E} \cdot \mathbf{D} + \mathbf{B} \cdot \mathbf{H} \right) + \boldsymbol{\nabla} \cdot \frac{c}{4\pi}(\mathbf{E} \times \mathbf{H})&=-\mathbf{E} \cdot \mathbf{J} \end{aligned} \quad\quad\quad(19)

The first of these two (18) is what I was after for application to optics where the radiation field in media can be expressed directly in terms of F instead of \mathbf{E} and \mathbf{B}. The second sets the dimensions appropriately and provides some confidence in the result since we can compare to the well known Poynting results in these units.
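The Poynting comparison can also be spot checked symbolically against a concrete vacuum plane wave. The following is only a sketch (sympy, and the particular field profile, are my additions, not from the post): with \epsilon = \mu = 1 we have \mathbf{D} = \mathbf{E} and \mathbf{H} = \mathbf{B}, there are no sources, and the Gaussian-units energy conservation residual should vanish identically.

```python
# Symbolic check of the Gaussian-units Poynting theorem for a vacuum plane
# wave (epsilon = mu = 1, no sources), so D = E and H = B:
# d/dt (E^2 + B^2)/(8 pi) + d/dz (c/4pi)(E x B)_z = 0.
import sympy as sp

t, z, c, k, E0 = sp.symbols('t z c k E0', positive=True)
w = c * k  # vacuum dispersion relation omega = c k

# Propagation along z, E along x, B along y, equal magnitudes.
Ex = E0 * sp.cos(k * z - w * t)
By = E0 * sp.cos(k * z - w * t)

u = (Ex**2 + By**2) / (8 * sp.pi)       # energy density
Sz = c / (4 * sp.pi) * Ex * By          # Poynting vector z component

residual = sp.simplify(sp.diff(u, t) + sp.diff(Sz, z))
print(residual)  # 0
```

The residual cancels only once the vacuum dispersion relation \omega = c k is imposed, which is why w is defined in terms of k from the start.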



Posted in Math and Physics Learning. | Tagged: , , | Leave a Comment »

Comparing phasor and geometric transverse solutions to the Maxwell equation (continued)

Posted by peeterjoot on August 9, 2009

Continuing with previous post.

[Click here for a PDF of this post with nicer formatting]

Explicit split of geometric phasor into advanced and receding parts

For a more general split of the geometric phasor into advanced and receding wave terms, will there be interdependence between the electric and magnetic field components? Going back to (16), and rearranging, we have

\begin{aligned}2 e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} }&=(C_{-} -I S_{-})+\hat{\mathbf{k}} (C_{-} -I S_{-} )+(C_{+} +I S_{+})-\hat{\mathbf{k}} (C_{+} +I S_{+}) \\  \end{aligned}

So we have

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} }&=\frac{1}{{2}}(1 + \hat{\mathbf{k}})e^{-I(\omega t - \mathbf{k} \cdot \mathbf{x})}+\frac{1}{{2}}(1 - \hat{\mathbf{k}})e^{I(\omega t + \mathbf{k} \cdot \mathbf{x})} \end{aligned} \quad\quad\quad(19)

As observed, if we have \hat{\mathbf{k}} \mathcal{F} = \mathcal{F}, the result is only the advanced wave term

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F} = e^{-I(\omega t - \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}

Similarly, with absorption of \hat{\mathbf{k}} with the opposing sign \hat{\mathbf{k}} \mathcal{F} = -\mathcal{F}, we have only the receding wave

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F} = e^{I(\omega t + \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}
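These absorption reductions can be spot checked numerically. Below is a sketch in a Pauli-matrix model of the spatial algebra (the matrix representation, \mu\epsilon = 1, and the concrete numbers are my assumptions, not from the post): with propagation along z we take \hat{\mathbf{k}} \rightarrow \sigma_3 and the pseudoscalar I \rightarrow i times the identity, and verify that a transverse \mathcal{F} satisfying \hat{\mathbf{k}} \mathcal{F} = \mathcal{F} yields an advanced wave annihilated by the source-free Maxwell operator.

```python
# Pauli-matrix spot check (assuming mu*eps = 1, propagation along z):
# with khat Fcal = Fcal, the advanced wave exp(-i(w t - k z)) Fcal is
# annihilated by the Maxwell operator sigma_3 d/dz + (1/c) d/dt.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

Fcal = s1 + 1j * s2                   # E along x, B along y: Fcal = E + I B
assert np.allclose(s3 @ Fcal, Fcal)   # absorption: khat Fcal = Fcal

c, k = 1.0, 2.0
w = c * k                             # dispersion relation
t, z = 0.3, 0.7                       # arbitrary sample point
F = np.exp(-1j * (w * t - k * z)) * Fcal
# analytic derivatives of the phase: d/dz -> i k, d/dt -> -i w
residual = s3 @ (1j * k * F) + (1.0 / c) * (-1j * w) * F
print(np.abs(residual).max())         # ~ 0
```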

Either the receding or the advancing wave solution should independently satisfy the Maxwell equation operator. Let's verify this: for each of the \pm cases we check that the following is a solution, and examine the constraints required for that to be the case.

\begin{aligned}F = \frac{1}{{2}}(1 \pm \hat{\mathbf{k}}) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned} \quad\quad\quad(20)

Now we wish to apply the Maxwell equation operator \boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0 to this assumed solution. That is

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F \\ &= \sigma_m \frac{1}{{2}}(1 \pm \hat{\mathbf{k}}) (\pm I)(\pm k^m) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F}+ \frac{1}{{2}}(1 \pm \hat{\mathbf{k}}) (\pm I \sqrt{\mu\epsilon}\omega/c) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \\ &= \frac{\pm I}{2}\left(\pm \mathbf{k} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right)(1 \pm \hat{\mathbf{k}}) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}

By left multiplication with the conjugate of the Maxwell operator \nabla - \sqrt{\mu\epsilon}\partial_0 we have the wave equation operator, and applying that, we have as before, a magnitude constraint on the wave number \mathbf{k}

\begin{aligned}0 &= (\boldsymbol{\nabla} - \sqrt{\mu\epsilon}\partial_0) (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F \\ &= (\boldsymbol{\nabla}^2 - {\mu\epsilon}\partial_{00}) F \\ &= \frac{-1}{2}(1 \pm \hat{\mathbf{k}}) \left( \mathbf{k}^2 - \mu\epsilon\frac{\omega^2}{c^2}\right) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}

So we have, as before, {\left\lvert{\mathbf{k}}\right\rvert} = \sqrt{\mu\epsilon}\omega/c. Substituting into the first order operator result, we have

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F \\ &= \frac{\pm I}{2}\sqrt{\mu\epsilon}\frac{\omega}{c}\left(\pm \hat{\mathbf{k}} + 1\right)(1 \pm \hat{\mathbf{k}}) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}

Observe that the multivector 1 \pm \hat{\mathbf{k}}, when squared is just a multiple of itself

\begin{aligned}(1 \pm \hat{\mathbf{k}})^2 = 1 + \hat{\mathbf{k}}^2 \pm 2 \hat{\mathbf{k}} = 2 (1 \pm \hat{\mathbf{k}}) \end{aligned}
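This idempotent-like property is easy to spot check numerically by representing the spatial algebra with Pauli matrices (a sketch; the matrix representation and the random test vector are my assumptions, not from the post):

```python
# Check (1 +- khat)^2 = 2 (1 +- khat) for a random unit vector khat,
# with the spatial GA basis vectors represented by Pauli matrices.
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(0)
khat = rng.normal(size=3)
khat /= np.linalg.norm(khat)
K = sum(kj * sj for kj, sj in zip(khat, s))  # khat as a matrix; K @ K = identity
I2 = np.eye(2)

for P in (I2 + K, I2 - K):
    assert np.allclose(P @ P, 2 * P)
print("ok")
```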

So we have

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F \\ &= {\pm I}\sqrt{\mu\epsilon}\frac{\omega}{c}(1 \pm \hat{\mathbf{k}}) e^{\pm I(\omega t \pm \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned}

So we see that the constraint on the individual assumed solutions is again one of absorption. Separately, the advanced or receding parts of the geometric phasor as expressed in (20) are solutions provided

\begin{aligned}\hat{\mathbf{k}} F = \mp F \end{aligned} \quad\quad\quad(21)

The geometric phasor is seen to be a curious superposition of both advancing and receding states. Independently we have something pretty much like the standard transverse phasor wave states. Is this superposition state physically meaningful? It is a solution to the Maxwell equation (without any constraints on \boldsymbol{\mathcal{E}} and \boldsymbol{\mathcal{B}}).


Comparing phasor and geometric transverse solutions to the Maxwell equation

Posted by peeterjoot on August 8, 2009

[Click here for a PDF of this post with nicer formatting]


In ([1]) a phasor like form of the transverse wave equation was found by considering Fourier solutions of the Maxwell equation. This will be called the “geometric phasor”, since it is hard to refer to and compare it without giving it a name. Curiously, no perpendicularity condition for \mathbf{E} and \mathbf{B} seemed to be required for this geometric phasor. Why would that be the case? In Jackson’s treatment, which employed the traditional dot and cross product form of Maxwell’s equations, this condition followed from substituting the assumed phasor solution back into the equations. This back substitution wasn’t done in ([1]). If we attempt it we should find the same sort of additional mutual perpendicularity constraints on the fields.

Here we start with the equations from Jackson ([2], ch7), expressed in GA form. Using the same assumed phasor form we should get the same results using GA. Anything else indicates a misunderstanding or mistake, so as an intermediate step we should at least recover the Jackson result.

After using a more traditional phasor form (where one would have to take real parts) we revisit the geometric phasor found in ([1]). It will be found that the perpendicularity constraints of the Jackson phasor solution lead to a representation where the geometric phasor is reduced to the Jackson form with a straight substitution of the imaginary i with the pseudoscalar I = \sigma_1\sigma_2\sigma_3. This representation, however, like the more general geometric phasor, requires no selection of real or imaginary parts to construct a “physical” solution.

With assumed phasor field

Maxwell’s equations in absence of charge and current ((7.1) of Jackson) can be summarized by

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F  \end{aligned} \quad\quad\quad(1)

The F above is a composite electric and magnetic field merged into a single multivector. In the spatial basis the electric field component \mathbf{E} is a vector, and the magnetic component I\mathbf{B} is a bivector (in the Dirac basis both are bivectors).

\begin{aligned}F &= \mathbf{E} + I \mathbf{B}/\sqrt{\mu\epsilon} \end{aligned} \quad\quad\quad(2)

With an assumed phasor form

\begin{aligned}F = \mathcal{F} e^{ i(\mathbf{k} \cdot \mathbf{x} - \omega t) } = (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}) e^{ i(\mathbf{k} \cdot \mathbf{x} - \omega t) } \end{aligned} \quad\quad\quad(3)

Although there are many geometric multivectors that square to -1, we do not assume here that the imaginary i has any specific geometric meaning, and in fact commutes with all multivectors. Because of this we have to take the real parts later when done.

Operating on F with Maxwell’s equation we have

\begin{aligned}0 = (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F = i \left( \mathbf{k} - \sqrt{\mu\epsilon}\frac{\omega}{c} \right) F  \end{aligned} \quad\quad\quad(4)

Similarly, left multiplication of Maxwell’s equation by the conjugate operator \boldsymbol{\nabla} - \sqrt{\mu\epsilon}\partial_0, we have the wave equation

\begin{aligned}0 &= \left(\boldsymbol{\nabla}^2 - \frac{\mu\epsilon}{c^2}\frac{\partial^2}{\partial t^2}\right) F  \end{aligned} \quad\quad\quad(5)

and substitution of the assumed phasor solution gives us

\begin{aligned}0 = (\boldsymbol{\nabla}^2 - {\mu\epsilon}\partial_{00}) F = -\left( \mathbf{k}^2 - {\mu\epsilon}\frac{\omega^2}{c^2} \right) F  \end{aligned} \quad\quad\quad(6)

This provides the relation between the magnitude of \mathbf{k} and \omega, namely

\begin{aligned}{\left\lvert{\mathbf{k}}\right\rvert} = \pm \sqrt{\mu\epsilon}\frac{\omega}{c}  \end{aligned} \quad\quad\quad(7)

Without any real loss of generality we can pick the positive root, so the result of the Maxwell equation operator on the phasor is

\begin{aligned}0 = (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F = i \sqrt{\mu\epsilon}\frac{\omega}{c} \left( \hat{\mathbf{k}} - 1\right) F  \end{aligned} \quad\quad\quad(8)

Rearranging we have the curious property that the field F can “swallow” a left multiplication by the propagation direction unit vector

\begin{aligned}\hat{\mathbf{k}} F = F  \end{aligned} \quad\quad\quad(9)
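This “swallowing” property can be spot checked numerically once the transverse coupling between \mathbf{E} and \mathbf{B} derived below is imposed. A sketch with \mu\epsilon = 1, using Pauli matrices to model the spatial algebra (the representation and sample values are my assumptions, not from the post):

```python
# With mu*eps = 1: for any khat and transverse E, setting B = khat x E
# makes F = E + I B "swallow" left multiplication by khat: khat F = F.
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def vec(a):
    # map a 3-vector to its Pauli representation a . sigma
    return sum(aj * sj for aj, sj in zip(a, s))

rng = np.random.default_rng(1)
khat = rng.normal(size=3)
khat /= np.linalg.norm(khat)
E = rng.normal(size=3)
E -= (E @ khat) * khat           # keep only the transverse part
B = np.cross(khat, E)            # the coupling below, with mu*eps = 1
F = vec(E) + 1j * vec(B)         # F = E + I B; pseudoscalar I -> i * identity
K = vec(khat)
print(np.allclose(K @ F, F))     # True: khat F = F
```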

Selection of the scalar and pseudoscalar grades of this equation shows that the electric and magnetic fields \mathbf{E} and \mathbf{B} are both completely transverse to the propagation direction \hat{\mathbf{k}}. For the scalar grades we have

\begin{aligned}0 &= \left\langle{{\hat{\mathbf{k}} F - F}}\right\rangle \\   &= \hat{\mathbf{k}} \cdot \mathbf{E} \end{aligned}

and for the pseudoscalar

\begin{aligned}0 &= {\left\langle{{\hat{\mathbf{k}} F - F}}\right\rangle}_{3} \\   &= I \hat{\mathbf{k}} \cdot \mathbf{B} \end{aligned}

From this we have \hat{\mathbf{k}} \cdot \mathbf{E} = \hat{\mathbf{k}} \cdot \mathbf{B} = 0. Because of this transverse property we see that the \hat{\mathbf{k}} multiplication of F in (9) serves to map electric field (vector) components into bivectors, and the magnetic bivector components into vectors. For the result to be the same we must have an additional coupling between the field components. Writing out (9) in terms of the field components we have

\begin{aligned}\mathbf{E} + I\mathbf{B}/\sqrt{\mu\epsilon} &= \hat{\mathbf{k}} (\mathbf{E} + I\mathbf{B}/\sqrt{\mu\epsilon} ) \\ &= \hat{\mathbf{k}} \wedge \mathbf{E} + I (\hat{\mathbf{k}} \wedge \mathbf{B})/\sqrt{\mu\epsilon}  \\ &= I \hat{\mathbf{k}} \times \mathbf{E} + I^2 (\hat{\mathbf{k}} \times \mathbf{B})/\sqrt{\mu\epsilon}  \end{aligned}

Equating left and right hand grades we have

\begin{aligned}\mathbf{E} &= -(\hat{\mathbf{k}} \times \mathbf{B})/\sqrt{\mu\epsilon} \\ \mathbf{B} &= \sqrt{\mu\epsilon} (\hat{\mathbf{k}} \times \mathbf{E}) \end{aligned} \quad\quad\quad(10)

Since \mathbf{E} and \mathbf{B} both have the same phase relationships we also have

\begin{aligned}\boldsymbol{\mathcal{E}} &= -(\hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}})/\sqrt{\mu\epsilon} \\ \boldsymbol{\mathcal{B}} &= \sqrt{\mu\epsilon} (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}}) \end{aligned} \quad\quad\quad(12)
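The mutual coupling is internally consistent: substituting one relation into the other reproduces the original transverse field. A quick numeric check (numpy and the sample values are my additions):

```python
# Consistency of the mutual coupling: substituting B = sqrt(mu eps) khat x E
# into E = -(khat x B)/sqrt(mu eps) reproduces E for any transverse E.
import numpy as np

rng = np.random.default_rng(2)
khat = rng.normal(size=3)
khat /= np.linalg.norm(khat)
E = rng.normal(size=3)
E -= (E @ khat) * khat           # transverse part only
mueps = 1.7                      # arbitrary positive value of mu*epsilon
B = np.sqrt(mueps) * np.cross(khat, E)
E_back = -np.cross(khat, B) / np.sqrt(mueps)
print(np.allclose(E_back, E))    # True
```

This works because -\hat{\mathbf{k}} \times (\hat{\mathbf{k}} \times \mathbf{E}) = \mathbf{E} when \hat{\mathbf{k}} \cdot \mathbf{E} = 0.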

With phasors as used in electrical engineering it is usual to allow the fields to have complex values. Assuming this is allowed here too, taking real parts of F, and separating by grade, we have for the electric and magnetic fields

\begin{aligned}\begin{pmatrix}\mathbf{E} \\ \mathbf{B}\end{pmatrix}=\text{Real}\begin{pmatrix}\boldsymbol{\mathcal{E}} \\ \boldsymbol{\mathcal{B}}\end{pmatrix}\cos(\mathbf{k} \cdot \mathbf{x} - \omega t)-\text{Imag}\begin{pmatrix}\boldsymbol{\mathcal{E}} \\ \boldsymbol{\mathcal{B}}\end{pmatrix}\sin(\mathbf{k} \cdot \mathbf{x} - \omega t) \end{aligned} \quad\quad\quad(14)
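A small numeric check of how a complex amplitude's real part splits into cosine and sine pieces, with \theta standing in for \mathbf{k} \cdot \mathbf{x} - \omega t (the sample values are arbitrary, my additions):

```python
# Real part of a complex phasor amplitude: for complex a and real theta,
# Re[a exp(i theta)] = Re(a) cos(theta) - Im(a) sin(theta).
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal() + 1j * rng.normal()
theta = rng.normal(size=5)        # several sample phases k.x - w t
lhs = (a * np.exp(1j * theta)).real
rhs = a.real * np.cos(theta) - a.imag * np.sin(theta)
print(np.allclose(lhs, rhs))      # True
```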

We will find a slightly different separation into electric and magnetic fields with the geometric phasor.

Geometrized phasor.

Translating from SI units to the CGS units of Jackson the geometric phasor representation of the field was found previously to be

\begin{aligned}F = e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}) \end{aligned} \quad\quad\quad(15)

As above the transverse requirement \boldsymbol{\mathcal{E}} \cdot \mathbf{k} = \boldsymbol{\mathcal{B}} \cdot \mathbf{k} = 0 was required. Application of Maxwell’s equation operator should show if we require any additional constraints. That is

\begin{aligned}0 &= (\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) F \\ &=(\boldsymbol{\nabla} + \sqrt{\mu\epsilon}\partial_0) e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}) \\ &=\sum \sigma_m e^{ -I \hat{\mathbf{k}} \omega t } (I k^m) e^{ I \mathbf{k} \cdot \mathbf{x} } (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}) -I \hat{\mathbf{k}} \sqrt{\mu\epsilon} \frac{\omega}{c} e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}) \\ &=I \left(\mathbf{k} - \hat{\mathbf{k}} \sqrt{\mu\epsilon} \frac{\omega}{c} \right) e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon})  \end{aligned}

This is zero for any combination of \boldsymbol{\mathcal{E}} or \boldsymbol{\mathcal{B}} since \mathbf{k} = \hat{\mathbf{k}} \sqrt{\mu\epsilon} \omega/c. It therefore appears that this geometric phasor has a fundamentally different nature than the non-geometric version. We have two exponentials that commute, but due to the difference in grades of the arguments, it doesn’t appear that there is any easy way to express this as a single argument exponential. Multiplying these out, and using the trig product to sum identities, helps shed some light on the differences between the geometric phasor and the one using a generic imaginary. Starting off we have

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} }&=(\cos(\omega t) -I\hat{\mathbf{k}} \sin(\omega t)) (\cos(\mathbf{k} \cdot \mathbf{x}) +I\sin(\mathbf{k} \cdot \mathbf{x})) \\ &=\cos(\omega t)\cos(\mathbf{k} \cdot \mathbf{x}) + \hat{\mathbf{k}} \sin(\omega t)\sin(\mathbf{k} \cdot \mathbf{x})-I\hat{\mathbf{k}} \sin(\omega t)\cos(\mathbf{k} \cdot \mathbf{x})+I \cos(\omega t) \sin(\mathbf{k} \cdot \mathbf{x}) \\  \end{aligned}

In this first expansion we see that this product of exponentials has scalar, vector, bivector, and pseudoscalar grades, despite the fact that we have only
vector and bivector terms in the end result. That will be seen to be due to the transverse nature of \mathcal{F} that we multiply with. Before performing that final multiplication, writing C_{-} = \cos(\omega t - \mathbf{k} \cdot \mathbf{x}), C_{+} = \cos(\omega t + \mathbf{k} \cdot \mathbf{x}), S_{-} = \sin(\omega t - \mathbf{k} \cdot \mathbf{x}), and S_{+} = \sin(\omega t + \mathbf{k} \cdot \mathbf{x}), we have

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} }&=\frac{1}{{2}}\left( (C_{-} + C_{+})+\hat{\mathbf{k}} (C_{-} - C_{+})-I \hat{\mathbf{k}} (S_{-} + S_{+})-I (S_{-} - S_{+})\right) \end{aligned} \quad\quad\quad(16)
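This product-to-sum expansion can be spot checked numerically in a Pauli-matrix model of the spatial algebra, with \hat{\mathbf{k}} \rightarrow \sigma_3 and the pseudoscalar I \rightarrow i times the identity (the representation and the sample angles are my assumptions, not from the post):

```python
# Spot check of the product-to-sum expansion of
# exp(-I khat w t) exp(I k.x) against its C/S form, with khat -> sigma_3.
import numpy as np

s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
wt, kx = 0.83, 1.21   # arbitrary sample values of w t and k.x

# exp(-I khat w t) = cos(wt) - I khat sin(wt); exp(I k.x) is a scalar phase.
lhs = (np.cos(wt) * I2 - 1j * np.sin(wt) * s3) @ ((np.cos(kx) + 1j * np.sin(kx)) * I2)

Cm, Cp = np.cos(wt - kx), np.cos(wt + kx)
Sm, Sp = np.sin(wt - kx), np.sin(wt + kx)
rhs = 0.5 * ((Cm + Cp) * I2 + (Cm - Cp) * s3
             - 1j * (Sm + Sp) * s3 - 1j * (Sm - Sp) * I2)
print(np.allclose(lhs, rhs))  # True
```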

As an operator the left multiplication of \hat{\mathbf{k}} on a transverse vector has the action

\begin{aligned}\hat{\mathbf{k}} ( \cdot ) &= \hat{\mathbf{k}} \wedge (\cdot) \\ &= I (\hat{\mathbf{k}} \times (\cdot)) \\  \end{aligned}

This gives

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} }&=\frac{1}{{2}}\left( (C_{-} + C_{+})+(C_{-} - C_{+}) I \hat{\mathbf{k}} \times+(S_{-} + S_{+}) \hat{\mathbf{k}} \times-I (S_{-} - S_{+})\right) \end{aligned} \quad\quad\quad(17)

Now, lets apply this to the field with \mathcal{F} = \boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}/\sqrt{\mu\epsilon}. To avoid dragging around the \sqrt{\mu\epsilon} factors, let’s also temporarily
work with units where \mu\epsilon = 1. We then have

\begin{aligned}2 e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F}&= (C_{-} + C_{+}) (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}) \\ &+ (C_{-} - C_{+}) (I (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}}) - \hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}}) \\ &+ (S_{-} + S_{+}) (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}} +I (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}}))  \\ &+ (S_{-} - S_{+}) (-I \boldsymbol{\mathcal{E}} + \boldsymbol{\mathcal{B}})  \end{aligned}

Rearranging explicitly in terms of the electric and magnetic field components this is

\begin{aligned}2 e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F}&= (C_{-} + C_{+}) \boldsymbol{\mathcal{E}} -(C_{-} - C_{+}) (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}})+(S_{-} + S_{+}) (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}})+(S_{-} - S_{+}) \boldsymbol{\mathcal{B}} \\ &+{I}\left(  (C_{-} + C_{+}) \boldsymbol{\mathcal{B}}+(C_{-} - C_{+}) (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}})+(S_{-} + S_{+}) (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}})-(S_{-} - S_{+}) \boldsymbol{\mathcal{E}} \right) \\  \end{aligned}

Quite a mess! A first observation is that application of the perpendicularity conditions (12) gives a remarkable reduction in complexity. That is

\begin{aligned}2 e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F}&= (C_{-} + C_{+}) \boldsymbol{\mathcal{E}} +(C_{-} - C_{+}) \boldsymbol{\mathcal{E}}+(S_{-} + S_{+}) \boldsymbol{\mathcal{B}}+(S_{-} - S_{+}) \boldsymbol{\mathcal{B}}\\ &+{I}\left(  (C_{-} + C_{+}) \boldsymbol{\mathcal{B}}+(C_{-} - C_{+}) \boldsymbol{\mathcal{B}}-(S_{-} + S_{+}) \boldsymbol{\mathcal{E}}-(S_{-} - S_{+}) \boldsymbol{\mathcal{E}} \right) \\  \end{aligned}

This wipes out the receding wave terms, leaving only the advanced wave terms

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F}&= C_{-} \boldsymbol{\mathcal{E}} +S_{-} (\hat{\mathbf{k}} \times \boldsymbol{\mathcal{E}})+{I}\left(  C_{-} \boldsymbol{\mathcal{B}} +S_{-} \hat{\mathbf{k}} \times \boldsymbol{\mathcal{B}} \right) \\ &= C_{-} (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}})+S_{-} (\boldsymbol{\mathcal{B}} -I\boldsymbol{\mathcal{E}}) \\ &=( C_{-} -I S_{-} ) (\boldsymbol{\mathcal{E}} + I\boldsymbol{\mathcal{B}}) \\  \end{aligned}

We see therefore, for this special case of mutually perpendicular (equal magnitude) field components, that our geometric phasor has only the advanced wave term

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F} &= e^{-I(\omega t - \mathbf{k} \cdot \mathbf{x})} \mathcal{F} \end{aligned} \quad\quad\quad(18)

If we pick this as the starting point for the assumed solution, it is clear that the same perpendicularity constraints will follow as in Jackson’s treatment, or the GA version of it above. We have something slightly different though, for there is no requirement to take real parts of this simplified geometric phasor, since the result already contains just the vector and bivector terms of the electric and magnetic fields respectively.

A small aside, before continuing. Having observed that we can write the assumed phasor for this transverse field in the form of (18), an easier way to demonstrate that the product of exponentials reduces only to the advanced wave term is now clear. Instead of using (12) we could start back at (16) and employ the absorption property \hat{\mathbf{k}} \mathcal{F} = \mathcal{F}. That gives

\begin{aligned}e^{ -I \hat{\mathbf{k}} \omega t } e^{ I \mathbf{k} \cdot \mathbf{x} } \mathcal{F}&=\frac{1}{{2}}\left( (C_{-} + C_{+})+(C_{-} - C_{+})-I (S_{-} + S_{+})-I (S_{-} - S_{+})\right) \mathcal{F} \\ &=\left( C_{-} -I S_{-} \right) \mathcal{F} \end{aligned}

That’s the same result, obtained in a slicker manner. What is perhaps of more interest is examining the general split of our geometric phasor into advanced and receding wave terms, and examining the interdependence, if any, between the electric and magnetic field components. Since this didn’t lead exactly to where I expected, that’s now left as a project for a different day.


[1] Peeter Joot. Space time algebra solutions of the Maxwell equation for discrete frequencies [online].

[2] JD Jackson. Classical Electrodynamics. Wiley, 2nd edition, 1975.


(A POSSIBLY WRONG) Superposition of transverse electromagnetic field solutions.

Posted by peeterjoot on August 3, 2009

[Click here for a PDF of this post with nicer formatting]


FIXME: I’M NOT CONFIDENT THAT I GOT THIS RIGHT. An attempt to verify the final result didn’t work. I’ve either messed up along the way (perhaps right at the beginning), or my verification itself was busted. I’m posting these working notes for now, and will revisit later after thinking it through again (or trying the verification again).


In ([1]), a Geometric Algebra solution for the transverse components of the electromagnetic field was found. Here we construct a superposition of these transverse fields, keeping the propagation direction fixed, and allowing for continuous variation of the wave number and angular velocity. Evaluation of this superposition integral, first utilizing a contour integral, then utilizing an inverse Fourier transform allows for the determination of the functional form of a general wave packet moving along a fixed direction in space. This wave packet will be seen to have two time dependencies, an advanced time term, and a retarded time term. The phasors will be eliminated from both the propagation and the transverse fields and will provide operators with which the transverse field can be calculated from a general propagation field wave packet.

Superposition of phasor solutions.

When the field is required to have explicit sinusoidal time and propagation direction, the field components in the transverse (to the propagation) direction were found to be

\begin{aligned}F_t &= \frac{1}{{i \left( \pm k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{\omega}{c}\right) }} \boldsymbol{\nabla}_t F_z \\ &= \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t F_z \end{aligned} \quad\quad\quad(1)

Here F_z = F_z(x,y) = \mathbf{E}_z(x,y) + I\mathbf{B}_z(x,y)/\sqrt{\mu\epsilon} is required by construction to commute with \hat{\mathbf{z}}, but is otherwise arbitrary, except perhaps for boundary value constraints not considered here.

Removing the restriction to fixed wave number and angular velocity we can integrate over both to express a more general transverse wave propagation

\begin{aligned}F_t &= \int d\omega e^{-i\omega t} \int dk e^{ikz} \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left(k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t {F_z}^{k,\omega} \end{aligned} \quad\quad\quad(3)

Inventing temporary notation for convenience we write for the frequency dependent weighting function {F_z}^{k,\omega} = F_z(x,y,k,\omega). Also note that explicit \pm k has been dropped after allowing wave number to range over both positive and negative values.

Observe that each of these integrals has the form of a Fourier integral (ignoring constant factors that can be incorporated into the weighting function). Also observe that the integral kernel has two poles on the real k-axis (or real \omega-axis). These can be utilized to evaluate one of the integrals using an upper half plane semicircular contour. FIXME: picture here.

Assuming that {F_z}^{k,\omega} is small enough at \infty (on the large contour) to be neglected, we have for the integral after integrating around the poles at k = \pm \sqrt{\mu\epsilon}{\left\lvert{\omega/c}\right\rvert}

\begin{aligned} F_t &= \pi i \int d\omega e^{-i\omega t} {\left.e^{ikz} \frac{i}{k - \sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c}} \left(k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t {F_z}^{k,\omega}\right\vert}_{k=-\sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c}} \\ &+ \pi i \int d\omega e^{-i\omega t} {\left.e^{ikz} \frac{i}{k + \sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c}} \left(k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t {F_z}^{k,\omega}\right\vert}_{k=\sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c}} \end{aligned}

Writing {F_z}^{\omega+} = F_z(x,y,\omega, k=\sqrt{\mu\epsilon}{\left\lvert{\omega}\right\rvert}/c), and {F_z}^{\omega-} = F_z(x,y,\omega, k=-\sqrt{\mu\epsilon}{\left\lvert{\omega}\right\rvert}/c), for the values of our field at the poles, we have

\begin{aligned} F_t &= \frac{\pi}{2}\int d\omega e^{-i\omega t} e^{-i \sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c} z}\frac{\omega -\hat{\mathbf{z}}{\left\lvert{\omega}\right\rvert}}{{\left\lvert{\omega}\right\rvert}}\boldsymbol{\nabla}_t {F_z}^{\omega-} \\ &- \frac{\pi}{2}\int d\omega e^{-i\omega t} e^{i \sqrt{\mu\epsilon}\frac{{\left\lvert{\omega}\right\rvert}}{c} z} \frac{\omega +\hat{\mathbf{z}}{\left\lvert{\omega}\right\rvert}}{{\left\lvert{\omega}\right\rvert}}\boldsymbol{\nabla}_t {F_z}^{\omega+} \end{aligned}

Can one ignore the singularity at \omega = 0 since it divides out? If so, then we have

\begin{aligned} F_t &= \frac{\pi}{2}\int d\omega e^{-i\omega t} e^{-i \sqrt{\mu\epsilon}\frac{\left\lvert{\omega}\right\rvert}{c} z}(\text{sgn}(\omega) - \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega-} \\ &- \frac{\pi}{2}\int d\omega e^{-i\omega t} e^{i \sqrt{\mu\epsilon}\frac{\left\lvert{\omega}\right\rvert}{c} z} (\text{sgn}(\omega) + \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega+} \end{aligned}

Writing this out separately for the regions greater and lesser than zero we have

\begin{aligned} F_t &= \frac{\pi}{2}\int_{0+}^\infty d\omega e^{-i\omega t} e^{-i \sqrt{\mu\epsilon}\frac{\omega}{c} z}(1 - \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega-} - \frac{\pi}{2}\int_{0+}^\infty d\omega e^{-i\omega t} e^{i \sqrt{\mu\epsilon}\frac{\omega}{c} z} (1 + \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega+} \\ &- \frac{\pi}{2}\int_{-\infty}^{0-} d\omega e^{-i\omega t} e^{i \sqrt{\mu\epsilon}\frac{\omega}{c} z}(1 + \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega-} - \frac{\pi}{2}\int_{-\infty}^{0-} d\omega e^{-i\omega t} e^{-i \sqrt{\mu\epsilon}\frac{\omega}{c} z} (-1 + \hat{\mathbf{z}})\boldsymbol{\nabla}_t {F_z}^{\omega+} \\  \end{aligned}

Grouping by exponentials of like sign, and integrating over a region that omits some neighborhood around the origin, we have eliminated the {\left\lvert{\omega}\right\rvert} factors

\begin{aligned}F_t &= (1 - \hat{\mathbf{z}}) \boldsymbol{\nabla}_t\int d\omega e^{-i\omega \left(t + \sqrt{\mu\epsilon}\frac{z}{c}\right)}\frac{\pi}{2}({F_z}^{\omega-}\theta(\omega) + {F_z}^{\omega+}\theta(-\omega)) \\ &-(1 + \hat{\mathbf{z}}) \boldsymbol{\nabla}_t\int d\omega e^{-i\omega \left(t - \sqrt{\mu\epsilon}\frac{z}{c}\right)}\frac{\pi}{2}({F_z}^{\omega-}\theta(\omega) + {F_z}^{\omega+}\theta(-\omega)) \end{aligned}

The Heaviside unit step function has been used to group things nicely over the doubled integration ranges, and what is left now can be observed to be Fourier transforms from the frequency to time domain. The transformed functions are to be evaluated at a shifted time in each case.

\begin{aligned}F_t &= (1 - \hat{\mathbf{z}}) \boldsymbol{\nabla}_t\frac{\pi}{2}{\left.\mathcal{F}\left( ({F_z}^{\omega-}\theta(\omega) + {F_z}^{\omega+}\theta(-\omega)) \right)\right\vert}_{t=t + \sqrt{\mu\epsilon}\frac{z}{c}} \\ &-(1 + \hat{\mathbf{z}}) \boldsymbol{\nabla}_t\frac{\pi}{2}{\left.\mathcal{F}\left( ({F_z}^{\omega-}\theta(\omega) + {F_z}^{\omega+}\theta(-\omega)) \right)\right\vert}_{t=t - \sqrt{\mu\epsilon}\frac{z}{c}} \end{aligned}

Attempting to actually evaluate this Fourier transform shouldn’t be necessary, since the original angular velocity and wave number domain function F(x,y,\omega,k) was arbitrary (except for its commutation with \hat{\mathbf{z}}). Instead we just suppose (once again overloading the symbol F_z) that we have a function F_z(x,y,u) that at time u takes the value

\begin{aligned}F_z(x,y,u) = \frac{\pi}{2}\mathcal{F}\left( ({F_z}^{\omega-}\theta(\omega) + {F_z}^{\omega+}\theta(-\omega)) \right) \end{aligned}

The transform will not change the grades of the propagation field F_z so this still commutes with \hat{\mathbf{z}}. We can now write

\begin{aligned}F_t &= (1 - \hat{\mathbf{z}}) \boldsymbol{\nabla}_t F_z(x,y, t + \sqrt{\mu\epsilon}{z}/{c})-(1 + \hat{\mathbf{z}}) \boldsymbol{\nabla}_t F_z(x,y, t - \sqrt{\mu\epsilon}{z}/{c}) \end{aligned} \quad\quad\quad(4)

So, after a bunch of manipulation, we find exactly how the transverse component of the field is related to the propagation direction field, and have eliminated the phasor description that supplied our single frequency/wave-number field relations.
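Independent of the details above, the argument structure of (4) can be sanity checked: any profile of the form F_z(t \pm \sqrt{\mu\epsilon} z/c) satisfies the one dimensional wave equation regardless of its shape. A symbolic sketch (sympy is my addition, not from the post):

```python
# Any profile f(t +- sqrt(mu eps) z/c) satisfies the 1D wave equation
# (d^2/dz^2 - (mu eps / c^2) d^2/dt^2) f = 0, independent of its shape.
import sympy as sp

t, z, c, me = sp.symbols('t z c mueps', positive=True)
f = sp.Function('f')

for sign in (1, -1):
    u = f(t + sign * sp.sqrt(me) * z / c)
    residual = sp.simplify(sp.diff(u, z, 2) - (me / c**2) * sp.diff(u, t, 2))
    assert residual == 0
print("ok")
```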

What we have left is a propagation field that has the form of an arbitrary unidirectional wave packet, traveling in either direction through the medium with speed \pm c/\sqrt{\mu\epsilon}. If all this math is right, the transverse field for an advancing wave is generated by application onto the propagation field of the operator

\begin{aligned}-(1 + \hat{\mathbf{z}}) \boldsymbol{\nabla}_t \end{aligned} \quad\quad\quad(5)

Similarly, the transverse field for a receding wave is given by application of the operator

\begin{aligned}(1 - \hat{\mathbf{z}}) \boldsymbol{\nabla}_t \end{aligned} \quad\quad\quad(6)


Given the doubt about the \omega =0 point in the integral, and the possibility for sign error and other algebraic mistakes it seems worthwhile now to go back to Maxwell’s equation and verify the correctness of these results. Then, presuming everything worked out okay, it would also be good to relate things back to the electric and magnetic field components of the field.

Maxwell’s equation in these units is

\begin{aligned}\left(\boldsymbol{\nabla}_t \pm i k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) F(x,y) = 0 \end{aligned} \quad\quad\quad(7)

and we want to compute the transverse direction projection from the propagation direction term

\begin{aligned}F_t &= \mathbf{E}_t + I \mathbf{B}_t = \frac{1}{{2}} (F - \hat{\mathbf{z}} F \hat{\mathbf{z}}) \\ F_z &= \mathbf{E}_z + I \mathbf{B}_z = \frac{1}{{2}} (F + \hat{\mathbf{z}} F \hat{\mathbf{z}})  \end{aligned} \quad\quad\quad(8)

Picking one of the terms of (4), we have

\begin{aligned}F_t = \pm (1 \mp \hat{\mathbf{z}}) \boldsymbol{\nabla}_t F_z(t \pm \sqrt{\mu\epsilon} z/c)  \end{aligned}

Substituting into the left hand operator of Maxwell’s equation (7) and reducing, I get

\begin{aligned}((\hat{\mathbf{z}} \pm 1) {\nabla_t}^2 + \nabla_t) F_z + \frac{\mu\epsilon}{c}(1 \pm \hat{\mathbf{z}}) F_z' \end{aligned}

I don’t see how this would necessarily equal zero?


[1] Peeter Joot. Transverse electric and magnetic fields [online].

Posted in Math and Physics Learning.

Summarizing: Transverse electric and magnetic fields

Posted by peeterjoot on August 1, 2009

[Click here for a PDF of this post with nicer formatting]

There are potentially a lot of new ideas in the previous transverse field post (some new even for me, with previous exposure to the Geometric Algebra formalism). There was no real attempt to teach GA here, but for completeness the GA form of Maxwell’s equation was developed from the traditional divergence and curl formulation of Maxwell’s equations. That was mainly due to the use of CGS units, which make Maxwell’s equation take a different form from the usual (see [1]).

This time a less exploratory summary of the previous results above is assembled.

In these CGS units our field F, and Maxwell’s equation (in absence of charge and current), take the form

\begin{aligned}F &= \mathbf{E} + \frac{I\mathbf{B}}{\sqrt{\mu\epsilon}} \\ 0 &= \left(\boldsymbol{\nabla} + \frac{\sqrt{\mu\epsilon}}{c}\partial_t\right) F  \end{aligned} \quad\quad\quad(30)

The electric and magnetic fields can be picked off by selecting the grade one (vector) components

\begin{aligned}\mathbf{E} &= {\left\langle{{F}}\right\rangle}_{1} \\ \mathbf{B} &= {\left\langle{{-I F}}\right\rangle}_{1} \end{aligned} \quad\quad\quad(32)

With an explicit sinusoidal and z-axis time dependence for the field

\begin{aligned}F(x,y,z,t) &= F(x,y) e^{\pm i k z - i \omega t}  \end{aligned} \quad\quad\quad(34)

and a split of the gradient into transverse and z-axis components \boldsymbol{\nabla} = \boldsymbol{\nabla}_t + \hat{\mathbf{z}} \partial_z, Maxwell’s equation takes the form

\begin{aligned}\left(\boldsymbol{\nabla}_t \pm i k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) F(x,y) = 0 \end{aligned} \quad\quad\quad(35)

Writing for short F = F(x,y), we can split the field into transverse and z-axis components with the commutator and anticommutator products respectively. For the z-axis components we have

\begin{aligned}F_z \hat{\mathbf{z}} \equiv E_z + I B_z = \frac{1}{{2}} (F \hat{\mathbf{z}} + \hat{\mathbf{z}} F)  \end{aligned} \quad\quad\quad(36)

The projections onto the z-axis and transverse directions are respectively

\begin{aligned}F_z &= \mathbf{E}_z + I \mathbf{B}_z = \frac{1}{{2}} (F + \hat{\mathbf{z}} F \hat{\mathbf{z}}) \\ F_t &= \mathbf{E}_t + I \mathbf{B}_t = \frac{1}{{2}} (F - \hat{\mathbf{z}} F \hat{\mathbf{z}} ) \end{aligned} \quad\quad\quad(37)
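As a sanity check, these projection identities are easy to verify numerically. Here is a minimal sketch (my own illustration, not part of the original post) using the Pauli matrix representation of the spatial GA, in which the pseudoscalar I = \sigma_1 \sigma_2 \sigma_3 becomes i times the identity:

```python
import numpy as np

# Pauli matrices as a concrete representation of the spatial GA basis vectors;
# the pseudoscalar I = s1 s2 s3 works out to 1j times the identity.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = s1 @ s2 @ s3

def vec(v):
    """Map a 3-tuple to its GA vector representative."""
    return v[0]*s1 + v[1]*s2 + v[2]*s3

rng = np.random.default_rng(0)
E, B = rng.normal(size=3), rng.normal(size=3)
F = vec(E) + I @ vec(B)   # F = E + I B (constants suppressed)

# the projections of (37), with zhat represented by s3
Fz = 0.5 * (F + s3 @ F @ s3)
Ft = 0.5 * (F - s3 @ F @ s3)

# Fz retains only the z components, Ft only the transverse ones
assert np.allclose(Fz, vec([0, 0, E[2]]) + I @ vec([0, 0, B[2]]))
assert np.allclose(Ft, vec([E[0], E[1], 0]) + I @ vec([B[0], B[1], 0]))
```

This works because \hat{\mathbf{z}} \sigma_k \hat{\mathbf{z}} flips the sign of the transverse basis vectors and leaves \sigma_3 alone.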

With an application of the transverse gradient to the z-axis field we easily found the relation between the two field components

\begin{aligned}\boldsymbol{\nabla}_t F_z &= i \left( \pm k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{\omega}{c}\right) F_t \end{aligned} \quad\quad\quad(39)

A left division by the i \left( \pm k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{\omega}{c}\right) factor gives the total transverse field

\begin{aligned}F_t &= \frac{1}{{i \left( \pm k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{\omega}{c}\right) }} \boldsymbol{\nabla}_t F_z  \end{aligned} \quad\quad\quad(40)

Multiplication of both the numerator and denominator by the conjugate normalizes this

\begin{aligned}F_t &= \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t F_z \end{aligned} \quad\quad\quad(41)
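The conjugate multiplication works because \hat{\mathbf{z}}^2 = 1, so the denominator times its conjugate is a scalar. A quick numerical sketch (the Pauli matrix representation is assumed for illustration only, and a stands in for \sqrt{\mu\epsilon}\,\omega/c):

```python
import numpy as np

s3 = np.array([[1, 0], [0, -1]], dtype=complex)  # zhat in the Pauli representation
Id = np.eye(2)

k, a = 1.7, 0.4        # arbitrary values; a stands in for sqrt(mu eps) omega / c
D = k*s3 - a*Id        # +k zhat - a, the non-scalar part of the denominator
conj = k*s3 + a*Id     # its conjugate

# conjugate times original is the scalar k^2 - a^2, since zhat squares to 1
assert np.allclose(conj @ D, (k**2 - a**2) * Id)

# hence the inverse is the conjugate divided by that scalar
assert np.allclose(np.linalg.inv(D), conj / (k**2 - a**2))
```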

From this the transverse electric and magnetic fields may be picked off using the projective grade selection operations of (32), and are

\begin{aligned}\mathbf{E}_t &= \frac{i}{\mu\epsilon\frac{\omega^2}{c^2} -k^2} \left( \pm k \boldsymbol{\nabla}_t E_z - \frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t B_z \right) \\ \mathbf{B}_t &= \frac{i}{\mu\epsilon\frac{\omega^2}{c^2} -k^2} \left( {\mu\epsilon}\frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t E_z \pm k \boldsymbol{\nabla}_t B_z \right)  \end{aligned} \quad\quad\quad(42)


[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

Posted in Math and Physics Learning.

Transverse electric and magnetic fields

Posted by peeterjoot on July 31, 2009

[Click here for a PDF of this post with nicer formatting]


In Eli’s Transverse Electric and Magnetic Fields in a Conducting Waveguide blog entry he works through the algebra calculating the transverse components, the perpendicular to the propagation direction components.

This should be possible using Geometric Algebra too, and trying this made for a good exercise.


The starting point can be the same, the source free Maxwell’s equations. Writing \partial_0 = (1/c) \partial/{\partial t}, we have

\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{E} &= 0 \\ \boldsymbol{\nabla} \cdot \mathbf{B} &= 0 \\ \boldsymbol{\nabla} \times \mathbf{E} &= - \partial_0 \mathbf{B} \\ \boldsymbol{\nabla} \times \mathbf{B} &= \mu \epsilon \partial_0 \mathbf{E} \end{aligned} \quad\quad\quad(1)

Multiplication of the last two equations by the spatial pseudoscalar I, and using I \mathbf{a} \times \mathbf{b} = \mathbf{a} \wedge \mathbf{b}, the curl equations can be written in their dual bivector form

\begin{aligned}\boldsymbol{\nabla} \wedge \mathbf{E} &= - \partial_0 I \mathbf{B} \\ \boldsymbol{\nabla} \wedge \mathbf{B} &= \mu \epsilon \partial_0 I \mathbf{E} \end{aligned} \quad\quad\quad(5)

Now adding the dot and curl equations using \mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b} eliminates the cross products

\begin{aligned}\boldsymbol{\nabla} \mathbf{E} &= - \partial_0 I \mathbf{B} \\ \boldsymbol{\nabla} \mathbf{B} &= \mu \epsilon \partial_0 I \mathbf{E} \end{aligned} \quad\quad\quad(7)

These can be further merged without any loss, into the GA first order equation

\begin{aligned}\left(\boldsymbol{\nabla} + \frac{\sqrt{\mu\epsilon}}{c}\partial_t\right) \left(\mathbf{E} + \frac{I\mathbf{B}}{\sqrt{\mu\epsilon}} \right) = 0 \end{aligned} \quad\quad\quad(9)
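The merge relies on the geometric product identity \mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}, with \mathbf{a} \wedge \mathbf{b} = I (\mathbf{a} \times \mathbf{b}). A quick numerical check of that identity (assuming, as an illustration only, the Pauli matrix representation of the spatial GA):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2)
I = s1 @ s2 @ s3   # pseudoscalar, equal to 1j times the identity

def vec(v):
    return v[0]*s1 + v[1]*s2 + v[2]*s3

rng = np.random.default_rng(1)
a, b = rng.normal(size=3), rng.normal(size=3)

dot = np.dot(a, b) * Id           # a . b, the grade 0 (symmetric) part
wedge = I @ vec(np.cross(a, b))   # a ^ b = I (a x b), the grade 2 part

# geometric product = dot plus wedge
assert np.allclose(vec(a) @ vec(b), dot + wedge)
```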

We are really after solutions to the total multivector field F = \mathbf{E} + I \mathbf{B}/\sqrt{\mu\epsilon}. For this problem where separate electric and magnetic field components are desired, working from (7) is perhaps what we want?

Following Eli and Jackson, write \boldsymbol{\nabla} = \boldsymbol{\nabla}_t + \hat{\mathbf{z}} \partial_z, and

\begin{aligned}\mathbf{E}(x,y,z,t) &= \mathbf{E}(x,y) e^{\pm i k z - i \omega t} \\ \mathbf{B}(x,y,z,t) &= \mathbf{B}(x,y) e^{\pm i k z - i \omega t} \end{aligned} \quad\quad\quad(10)

Evaluating the z and t partials we have

\begin{aligned}(\boldsymbol{\nabla}_t \pm i k \hat{\mathbf{z}}) \mathbf{E}(x,y) &= \frac{i\omega}{c} I \mathbf{B}(x,y) \\ (\boldsymbol{\nabla}_t \pm i k \hat{\mathbf{z}}) \mathbf{B}(x,y) &= -\mu \epsilon \frac{i\omega}{c} I \mathbf{E}(x,y) \end{aligned} \quad\quad\quad(12)

For the remainder of these notes, the explicit (x,y) dependence will be assumed for \mathbf{E} and \mathbf{B}.

An obvious thing to try with these equations is just substitute one into the other. If that’s done we get the pair of second order harmonic equations

\begin{aligned}{\boldsymbol{\nabla}_t}^2\begin{pmatrix}\mathbf{E} \\  \mathbf{B} \end{pmatrix}= \left( k^2 - \mu \epsilon \frac{\omega^2}{c^2} \right)\begin{pmatrix}\mathbf{E} \\  \mathbf{B} \end{pmatrix} \end{aligned} \quad\quad\quad(14)

One could consider the problem solved here. Separately equating both sides of this equation to zero, we have the k^2 = \mu\epsilon \omega^2/c^2 constraint on the wave number and angular frequency, and the second order Laplacian on the left hand side is solved by the real or imaginary parts of any analytic function. This is especially appealing when one considers that we are after a multivector field of intrinsically complex nature.
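The claim about analytic functions is easy to spot check numerically. In this sketch (my own, with f(z) = z^3 as an arbitrary test function) a five point finite difference Laplacian is applied to the real part of an analytic function:

```python
import numpy as np

# real part of the analytic f(z) = z^3 is u = x^3 - 3 x y^2
N, h = 200, 0.01
x = np.arange(N) * h
X, Y = np.meshgrid(x, x, indexing='ij')
u = ((X + 1j*Y)**3).real

# five point finite difference Laplacian
lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
       + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4*u) / h**2

# away from the wrap-around boundary the Laplacian vanishes (to roundoff)
assert np.max(np.abs(lap[1:-1, 1:-1])) < 1e-6
```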

However, that is not really what we want as a solution. Doing the same on the unified Maxwell equation (9), we have

\begin{aligned}\left(\boldsymbol{\nabla}_t \pm i k \hat{\mathbf{z}} - \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) \left(\mathbf{E} + \frac{I\mathbf{B}}{\sqrt{\mu\epsilon}} \right) = 0 \end{aligned} \quad\quad\quad(15)

Selecting scalar, vector, bivector and trivector grades of this equation produces the following respective relations between the various components

\begin{aligned}0 = \left\langle{{\cdots}}\right\rangle &= \boldsymbol{\nabla}_t \cdot \mathbf{E} \pm i k \hat{\mathbf{z}} \cdot \mathbf{E} \\ 0 = {\left\langle{{\cdots}}\right\rangle}_{1} &= I \boldsymbol{\nabla}_t \wedge \mathbf{B}/\sqrt{\mu\epsilon} \pm i I k \hat{\mathbf{z}} \wedge \mathbf{B}/\sqrt{\mu\epsilon} - i \sqrt{\mu\epsilon}\frac{\omega}{c} \mathbf{E} \\ 0 = {\left\langle{{\cdots}}\right\rangle}_{2} &= \boldsymbol{\nabla}_t \wedge \mathbf{E} \pm i k \hat{\mathbf{z}} \wedge \mathbf{E} - i \frac{\omega}{c} I \mathbf{B} \\ 0 = {\left\langle{{\cdots}}\right\rangle}_{3} &= I \boldsymbol{\nabla}_t \cdot \mathbf{B}/\sqrt{\mu\epsilon} \pm i I k \hat{\mathbf{z}} \cdot \mathbf{B}/\sqrt{\mu\epsilon} \end{aligned} \quad\quad\quad(16)

From the scalar and pseudoscalar grades we have the propagation components in terms of the transverse ones

\begin{aligned}E_z &= \frac{\pm i}{k} \boldsymbol{\nabla}_t \cdot \mathbf{E}_t \\ B_z &= \frac{\pm i}{k} \boldsymbol{\nabla}_t \cdot \mathbf{B}_t  \end{aligned} \quad\quad\quad(20)

But this is the opposite of the relations that we are after. On the other hand from the vector and bivector grades we have

\begin{aligned}i \frac{\omega}{c} \mathbf{E} &= -\frac{1}{{\mu\epsilon}}\left(\boldsymbol{\nabla}_t \times \mathbf{B}_z \pm i k \hat{\mathbf{z}} \times \mathbf{B}_t\right) \\ i \frac{\omega}{c} \mathbf{B} &= \boldsymbol{\nabla}_t \times \mathbf{E}_z \pm i k \hat{\mathbf{z}} \times \mathbf{E}_t \end{aligned} \quad\quad\quad(22)

A clue from the final result.

From (22) and a lot of messy algebra we should be able to get the transverse equations. Is there a slicker way? The end result that Eli obtained suggests a path. That result was

\begin{aligned}\mathbf{E}_t = \frac{i}{\mu\epsilon \frac{\omega^2}{c^2} - k^2} \left( \pm k \boldsymbol{\nabla}_t E_z - \frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t B_z \right) \end{aligned} \quad\quad\quad(24)

The numerator looks like it can be factored, and after a bit of playing around a suitable factorization can be obtained:

\begin{aligned}{\left\langle{{ \left( \pm k + \frac{\omega}{c} \hat{\mathbf{z}} \right) \boldsymbol{\nabla}_t \hat{\mathbf{z}} \left( \mathbf{E}_z + I \mathbf{B}_z \right) }}\right\rangle}_{1}&={\left\langle{{ \left( \pm k + \frac{\omega}{c} \hat{\mathbf{z}} \right) \boldsymbol{\nabla}_t \left( E_z + I B_z \right) }}\right\rangle}_{1} \\ &=\pm k \boldsymbol{\nabla}_t E_z + \frac{\omega}{c} {\left\langle{{ I \hat{\mathbf{z}} \boldsymbol{\nabla}_t B_z }}\right\rangle}_{1} \\ &=\pm k \boldsymbol{\nabla}_t E_z + \frac{\omega}{c} I \hat{\mathbf{z}} \wedge \boldsymbol{\nabla}_t B_z \\ &=\pm k \boldsymbol{\nabla}_t E_z - \frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t B_z \\  \end{aligned}

Observe that the propagation components of the field \mathbf{E}_z + I\mathbf{B}_z can be written in terms of the symmetric product

\begin{aligned}\frac{1}{{2}} \left( \hat{\mathbf{z}} (\mathbf{E} + I\mathbf{B}) + (\mathbf{E} + I\mathbf{B}) \hat{\mathbf{z}} \right)&=\frac{1}{{2}} \left( \hat{\mathbf{z}} \mathbf{E} + \mathbf{E} \hat{\mathbf{z}} \right) + \frac{I}{2} \left( \hat{\mathbf{z}} \mathbf{B} + \mathbf{B} \hat{\mathbf{z}} \right) \\ &=\hat{\mathbf{z}} \cdot \mathbf{E} + I \hat{\mathbf{z}} \cdot \mathbf{B} \end{aligned}

Now the total field in CGS units was actually F = \mathbf{E} + I \mathbf{B}/\sqrt{\mu\epsilon}, not F = \mathbf{E} + I \mathbf{B}, so the factorization above isn’t exactly what we want. It does however, provide the required clue. We probably get the result we want by forming the symmetric product (a hybrid dot product selecting both the vector and bivector terms).

Symmetric product of the field with the direction vector.

Rearranging Maxwell’s equation (15) in terms of the transverse gradient and the total field F we have

\begin{aligned}\boldsymbol{\nabla}_t F = \left( \mp i k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) F \end{aligned} \quad\quad\quad(25)

With this our symmetric product is

\begin{aligned}\boldsymbol{\nabla}_t ( F \hat{\mathbf{z}} + \hat{\mathbf{z}} F) &= (\boldsymbol{\nabla}_t F) \hat{\mathbf{z}} - \hat{\mathbf{z}} (\boldsymbol{\nabla}_t F) \\ &=\left( \mp i k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) F \hat{\mathbf{z}}- \hat{\mathbf{z}} \left( \mp i k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{i\omega}{c}\right) F \\ &=i \left( \mp k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) (F \hat{\mathbf{z}} - \hat{\mathbf{z}} F) \\  \end{aligned}

The antisymmetric product on the right hand side should contain the desired transverse field components. To verify multiply it out

\begin{aligned}\frac{1}{{2}}(F \hat{\mathbf{z}} - \hat{\mathbf{z}} F)  &=\frac{1}{{2}}\left( \left(\mathbf{E} + I \mathbf{B}/\sqrt{\mu\epsilon}\right) \hat{\mathbf{z}} - \hat{\mathbf{z}} \left(\mathbf{E} + I \mathbf{B}/\sqrt{\mu\epsilon}\right) \right)  \\ &=\mathbf{E} \wedge \hat{\mathbf{z}} + I \mathbf{B}/\sqrt{\mu\epsilon} \wedge \hat{\mathbf{z}} \\ &=(\mathbf{E}_t + I \mathbf{B}_t/\sqrt{\mu\epsilon}) \hat{\mathbf{z}} \\  \end{aligned}
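This antisymmetric product identity can also be verified numerically. A sketch in the same assumed Pauli matrix representation (absorbing the 1/\sqrt{\mu\epsilon} factor into \mathbf{B} for brevity):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = s1 @ s2 @ s3   # pseudoscalar

def vec(v):
    return v[0]*s1 + v[1]*s2 + v[2]*s3

rng = np.random.default_rng(3)
E, B = rng.normal(size=3), rng.normal(size=3)  # B absorbing the 1/sqrt(mu eps)
F = vec(E) + I @ vec(B)

# the antisymmetric product with zhat keeps only the transverse components
anti = 0.5 * (F @ s3 - s3 @ F)
Ft = vec([E[0], E[1], 0]) + I @ vec([B[0], B[1], 0])
assert np.allclose(anti, Ft @ s3)
```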

Now, with multiplication by the conjugate quantity -i(\pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\omega/c), we can extract these transverse components.

\begin{aligned}\left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \left( \mp k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) (F \hat{\mathbf{z}} - \hat{\mathbf{z}} F) &=\left( -k^2 + {\mu\epsilon}\frac{\omega^2}{c^2}\right) (F \hat{\mathbf{z}} - \hat{\mathbf{z}} F)  \end{aligned}

Rearranging, we have the transverse components of the field

\begin{aligned}(\mathbf{E}_t + I \mathbf{B}_t/\sqrt{\mu\epsilon}) \hat{\mathbf{z}} &=\frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t \frac{1}{{2}}( F \hat{\mathbf{z}} + \hat{\mathbf{z}} F)  \end{aligned} \quad\quad\quad(26)

With right multiplication by \hat{\mathbf{z}}, and writing F = F_t + F_z, we have

\begin{aligned}F_t &= \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t F_z \end{aligned} \quad\quad\quad(27)

While this is a complete solution, we can additionally extract the electric and magnetic fields to compare results with Eli’s calculation. We take
vector grades to do so with \mathbf{E}_t = {\left\langle{{F_t}}\right\rangle}_{1}, and \mathbf{B}_t/\sqrt{\mu\epsilon} = {\left\langle{{-I F_t}}\right\rangle}_{1}. For the transverse electric field

\begin{aligned}{\left\langle{{ \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t (\mathbf{E}_z + I \mathbf{B}_z/\sqrt{\mu\epsilon}) }}\right\rangle}_{1} &=\pm k \hat{\mathbf{z}} (-\hat{\mathbf{z}}) \boldsymbol{\nabla}_t E_z + \frac{\omega}{c} \underbrace{{\left\langle{{I \boldsymbol{\nabla}_t \hat{\mathbf{z}}}}\right\rangle}_{1}}_{-I^2 \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t } B_z \\ &=\mp k \boldsymbol{\nabla}_t E_z + \frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t B_z \\  \end{aligned}

and for the transverse magnetic field

\begin{aligned}{\left\langle{{ -I \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t (\mathbf{E}_z + I \mathbf{B}_z/\sqrt{\mu\epsilon}) }}\right\rangle}_{1} &=-I \sqrt{\mu\epsilon}\frac{\omega}{c} \boldsymbol{\nabla}_t \mathbf{E}_z+{\left\langle{{ \left( \pm k \hat{\mathbf{z}} + \sqrt{\mu\epsilon}\frac{\omega}{c}\right) \boldsymbol{\nabla}_t \mathbf{B}_z/\sqrt{\mu\epsilon} }}\right\rangle}_{1}  \\ &=- \sqrt{\mu\epsilon}\frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t E_z\mp k \boldsymbol{\nabla}_t B_z/\sqrt{\mu\epsilon} \\  \end{aligned}

Thus the split of transverse field into the electric and magnetic components yields

\begin{aligned}\mathbf{E}_t &= \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( \mp k \boldsymbol{\nabla}_t E_z + \frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t B_z \right) \\ \mathbf{B}_t &= \frac{i}{k^2 - \mu\epsilon\frac{\omega^2}{c^2}} \left( - {\mu\epsilon}\frac{\omega}{c} \hat{\mathbf{z}} \times \boldsymbol{\nabla}_t E_z \mp k \boldsymbol{\nabla}_t B_z \right)  \end{aligned} \quad\quad\quad(28)

Compared to Eli’s method using messy traditional vector algebra, this method also has a fair amount of messy tricky algebra, but of a different sort.

Posted in Math and Physics Learning.

Space time algebra solutions of the Maxwell equation for discrete frequencies.

Posted by peeterjoot on July 11, 2009

[Click here for a PDF of this post with nicer formatting]


How to obtain solutions to Maxwell’s equations in vacuum is well known. The aim here is to explore the same problem starting with the Geometric Algebra (GA) formalism ([1]) of the Maxwell equation.

\begin{aligned}\nabla F &= J/\epsilon_0 c \\ F &= \nabla \wedge A = \mathbf{E} + i c \mathbf{B} \end{aligned} \quad\quad\quad(1)

A Fourier transformation attack on the equation should be possible, so let’s see what falls out doing so.

Fourier problem.

Picking an observer bias for the gradient by premultiplying with \gamma_0, the vacuum equation for light can also be written as

\begin{aligned}0&= \gamma_0 \nabla F \\ &= \gamma_0 (\gamma^0 \partial_0 + \gamma^k \partial_k) F \\ &= (\partial_0 - \gamma^k \gamma_0 \partial_k) F \\ &= (\partial_0 + \sigma^k \partial_k) F \\ &= \left(\frac{1}{c}\partial_t + \boldsymbol{\nabla} \right) F \\  \end{aligned}

A Fourier transformation of this equation produces

\begin{aligned}0 &= \frac{1}{c} \frac{\partial \hat{F}}{\partial t}(\mathbf{k},t) + \frac{1}{(\sqrt{2\pi})^3} \int \sigma^m \partial_m F(\mathbf{x},t) e^{-i \mathbf{k} \cdot \mathbf{x}} d^3 x \end{aligned}

and with a single integration by parts one has

\begin{aligned}0&= \frac{1}{c} \frac{\partial \hat{F}}{\partial t}(\mathbf{k},t) - \frac{1}{(\sqrt{2\pi})^3} \int \sigma^m F(\mathbf{x},t) (-i k_m) e^{-i \mathbf{k} \cdot \mathbf{x}} d^3 x \\ &= \frac{1}{c} \frac{\partial \hat{F}}{\partial t}(\mathbf{k},t) + \frac{1}{(\sqrt{2\pi})^3} \int \mathbf{k} F(\mathbf{x},t) i e^{-i \mathbf{k} \cdot \mathbf{x}} d^3 x \\ &= \frac{1}{c} \frac{\partial \hat{F}}{\partial t}(\mathbf{k},t) + i \mathbf{k} \hat{F}(\mathbf{k},t) \end{aligned}

The pseudoscalar i = \gamma_0 \gamma_1 \gamma_2 \gamma_3 has been employed as the imaginary above, so it should be noted that pseudoscalar commutation with the Dirac bivectors was implied, but also that we do not have the flexibility to commute \mathbf{k} with F.
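The way the transform trades \sigma^m \partial_m for an i k_m multiplier can be illustrated with a discrete transform. A numpy sketch (a scalar function stands in for F here; numpy's forward transform matches the e^{-i \mathbf{k} \cdot \mathbf{x}} convention used above):

```python
import numpy as np

# differentiation in x becomes multiplication by i k under the transform
N, L = 256, 2*np.pi
x = np.linspace(0, L, N, endpoint=False)
f = np.exp(np.sin(x))            # a smooth periodic stand-in for F
df_exact = np.cos(x) * f         # analytic derivative for comparison

k = 2*np.pi * np.fft.fftfreq(N, d=L/N)   # angular wavenumbers
df_fft = np.fft.ifft(1j * k * np.fft.fft(f)).real

assert np.allclose(df_fft, df_exact, atol=1e-8)
```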

Having done this, the problem to solve is now Maxwell’s vacuum equation in the frequency domain

\begin{aligned}\frac{\partial \hat{F}}{\partial t}(\mathbf{k},t) = -i c \mathbf{k} \hat{F}(\mathbf{k},t) \end{aligned}

Introducing an angular frequency (spatial) bivector, and its vector dual

\begin{aligned}\Omega &= -i c \mathbf{k} \\ \boldsymbol{\omega} &= c \mathbf{k} \end{aligned}

This becomes

\begin{aligned}\hat{F}' = \Omega \hat{F} \end{aligned} \quad\quad\quad(5)

With solution

\begin{aligned}\hat{F} = e^{\Omega t} \hat{F}(\mathbf{k},0) \end{aligned}

Differentiation with respect to time verifies that the ordering of the terms is correct and this does in fact solve (5). This is something we have to be careful of due to the possibility of non-commuting variables.
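Since \Omega^2 = -(c {\left\lvert{\mathbf{k}}\right\rvert})^2 is a scalar, the exponential e^{\Omega t} has an explicit cosine plus sine expansion, and the left ordering of the solution can be checked numerically. A sketch, again in the assumed Pauli matrix representation:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def vec(v):
    return v[0]*s1 + v[1]*s2 + v[2]*s3

c = 1.0
k = np.array([0.3, -0.2, 0.5])      # arbitrary wave vector for the check
Omega = -1j * c * vec(k)            # Omega = -i c k in this representation
w = c * np.linalg.norm(k)

def U(t):
    # e^{Omega t} = cos(w t) + Omega sin(w t)/w, since Omega^2 = -w^2
    return np.cos(w*t)*np.eye(2) + (np.sin(w*t)/w) * Omega

rng = np.random.default_rng(4)
F0 = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))

# central difference in t: d/dt (e^{Omega t} F0) = Omega e^{Omega t} F0,
# with Omega on the left; ordering matters since Omega and F0 need not commute
t, h = 0.7, 1e-5
Fdot = (U(t + h) - U(t - h)) @ F0 / (2*h)
assert np.allclose(Fdot, Omega @ U(t) @ F0, atol=1e-6)
```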

Back substitution into the inverse transform now supplies the time evolution of the field given the initial time specification

\begin{aligned}F(\mathbf{x},t)&= \frac{1}{(\sqrt{2\pi})^3} \int e^{\Omega t} \hat{F}(\mathbf{k},0) e^{i \mathbf{k} \cdot \mathbf{x}} d^3 k \\ &= \frac{1}{(2\pi)^3} \int e^{\Omega t} \left( \int {F}(\mathbf{x}',0) e^{-i \mathbf{k} \cdot \mathbf{x}'} d^3 x' \right) e^{i \mathbf{k} \cdot \mathbf{x}} d^3 k \end{aligned}

Observe that pseudoscalar exponentials commute with the field, because i commutes with spatial vectors and with itself

\begin{aligned}F e^{i\theta}&= (\mathbf{E} + i c \mathbf{B}) (C + iS) \\ &=C (\mathbf{E} + i c \mathbf{B})+ S (\mathbf{E} + i c \mathbf{B}) i  \\ &=C (\mathbf{E} + i c \mathbf{B})+ S i (\mathbf{E} + i c \mathbf{B}) \\ &=e^{i\theta} F \end{aligned}

This allows the specifics of the initial time conditions to be suppressed

\begin{aligned}F(\mathbf{x},t) &= \int d^3 k e^{\Omega t} e^{i \mathbf{k} \cdot \mathbf{x}} \int \frac{1}{(2\pi)^3} {F}(\mathbf{x}',0) e^{-i \mathbf{k} \cdot \mathbf{x}'}  d^3 x' \end{aligned}

The interior integral has the job of a weighting function over plane wave solutions, and this can be made explicit by writing

\begin{aligned}D(\mathbf{k}) &= \frac{1}{(2\pi)^3} \int {F}(\mathbf{x}',0) e^{-i \mathbf{k} \cdot \mathbf{x}'}  d^3 x' \\ F(\mathbf{x},t) &= \int e^{\Omega t} e^{i \mathbf{k} \cdot \mathbf{x}} D(\mathbf{k}) d^3 k \end{aligned}

Many assumptions have been made here, not the least of which was a requirement for the Fourier transform of a bivector valued function to be meaningful, and have an inverse. It is therefore reasonable to verify that this weighted plane wave result is in fact a solution to the original Maxwell vacuum equation. Differentiation verifies that things are okay so far

\begin{aligned}\gamma_0 \nabla F(\mathbf{x},t)&=\left(\frac{1}{c}\partial_t + \boldsymbol{\nabla} \right)\int e^{\Omega t} e^{i \mathbf{k} \cdot \mathbf{x}} D(\mathbf{k}) d^3 k \\ &=\int \left(\frac{1}{c}\Omega e^{\Omega t} + \sigma^m e^{\Omega t} i k_m \right) e^{i \mathbf{k} \cdot \mathbf{x}} D(\mathbf{k}) d^3 k \\ &=\int \left(\frac{1}{c}(-i \mathbf{k} c) + i \mathbf{k} \right) e^{\Omega t} e^{i \mathbf{k} \cdot \mathbf{x}} D(\mathbf{k}) d^3 k \\ &= 0 \quad\quad\quad\square \end{aligned}

Discretizing and grade restrictions.

The fact that the integral has zero gradient does not mean that it is a bivector, so there must at least also be restrictions on the grades of D(\mathbf{k}).

To simplify discussion, let’s discretize the integral writing

\begin{aligned}D(\mathbf{k}') = D_\mathbf{k} \delta^3 (\mathbf{k} - \mathbf{k}') \end{aligned}

So we have

\begin{aligned}F(\mathbf{x},t)&= \int e^{\Omega t} e^{i \mathbf{k}' \cdot \mathbf{x}} D(\mathbf{k}') d^3 k' \\ &= \int e^{\Omega t} e^{i \mathbf{k}' \cdot \mathbf{x}} D_\mathbf{k} \delta^3(\mathbf{k} - \mathbf{k}') d^3 k' \\  \end{aligned}

This produces something planewave-ish

\begin{aligned}F(\mathbf{x},t) &= e^{\Omega t} e^{i \mathbf{k} \cdot \mathbf{x}} D_\mathbf{k} \end{aligned} \quad\quad\quad(10)

Observe that at t=0 we have

\begin{aligned}F(\mathbf{x},0)&= e^{i \mathbf{k} \cdot \mathbf{x}} D_\mathbf{k}  \\ &= (\cos (\mathbf{k} \cdot \mathbf{x}) + i \sin(\mathbf{k} \cdot \mathbf{x})) D_\mathbf{k}  \\  \end{aligned}

There is therefore a requirement for D_\mathbf{k} to be either a spatial vector or its dual, a spatial bivector. For example, taking D_\mathbf{k} to be a spatial vector we can then identify the electric and magnetic components of the field

\begin{aligned}\mathbf{E}(\mathbf{x},0) &= \cos (\mathbf{k} \cdot \mathbf{x}) D_\mathbf{k} \\ c \mathbf{B}(\mathbf{x},0) &= \sin (\mathbf{k} \cdot \mathbf{x}) D_\mathbf{k} \end{aligned}

and if D_\mathbf{k} is taken to be a spatial bivector, this pair of identifications would be inverted.

Considering (10) at \mathbf{x}=0, we have

\begin{aligned}F(0, t)&= e^{\Omega t} D_\mathbf{k} \\ &= (\cos({\left\lvert{\Omega}\right\rvert} t) + \hat{\Omega} \sin({\left\lvert{\Omega}\right\rvert} t)) D_\mathbf{k} \\ &= (\cos({\left\lvert{\Omega}\right\rvert} t) - i \hat{\mathbf{k}} \sin({\left\lvert{\Omega}\right\rvert} t)) D_\mathbf{k} \\  \end{aligned}

If D_\mathbf{k} is first assumed to be a spatial vector, then F would have a pseudoscalar component if D_\mathbf{k} has any component parallel to \hat{\mathbf{k}}. Excluding such terms requires, for the spatial vector and spatial bivector cases respectively,

\begin{aligned}D_\mathbf{k} \in \text{span}\{\sigma^m\} \implies D_\mathbf{k} \cdot \hat{\mathbf{k}} = 0 \end{aligned} \quad\quad\quad(11)

\begin{aligned}D_\mathbf{k} \in \text{span}\{\sigma^a \wedge \sigma^b\} \implies D_\mathbf{k} \cdot (i\hat{\mathbf{k}}) = 0 \end{aligned} \quad\quad\quad(12)

Since we can convert between the spatial vector and bivector cases using a duality transformation, it may appear that there is no loss of generality in imposing a spatial vector restriction on D_\mathbf{k}, at least in this current free case. However, an attempt to do so leads to trouble. In particular, this leads to collinear electric and magnetic fields, and thus the odd seeming condition where the field energy density is non-zero but the field momentum density (Poynting vector \mathbf{P} \propto \mathbf{E} \times \mathbf{B}) is zero. In retrospect being forced down the path of including both grades is not unreasonable, especially since this gives D_\mathbf{k} precisely the form of the field itself F = \mathbf{E} + i c \mathbf{B}.

Electric and Magnetic field split.

With the basic form of the Maxwell vacuum solution determined, we are now ready to start extracting information from the solution and making comparisons with the more familiar vector form. To start, the phasor form of the fundamental solution can be expanded explicitly in terms of two arbitrary spatial parametrization vectors \mathbf{E}_\mathbf{k} and \mathbf{B}_\mathbf{k}.

\begin{aligned}F &= e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) \end{aligned} \quad\quad\quad(13)

Whether these parametrization vectors have any relation to electric and magnetic fields respectively will have to be determined, but making that assumption for now to label these uniquely doesn’t seem unreasonable.

From (13) we can compute the electric and magnetic fields by the conjugate relations (25). Our conjugate is

\begin{aligned}F^\dagger&= (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) e^{-i \mathbf{k} \cdot \mathbf{x}} e^{i\boldsymbol{\omega} t} \\ &=e^{-i\boldsymbol{\omega} t}e^{-i \mathbf{k} \cdot \mathbf{x}}(\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) \end{aligned}

Thus for the electric field

\begin{aligned}F + F^\dagger&=e^{-i\boldsymbol{\omega} t} \left( e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k})+e^{-i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k})\right) \\ &=e^{-i\boldsymbol{\omega} t} \left( 2 \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}+ i c (2 i) \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \\ &=2 \cos(\omega t) \left( \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}- c \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \\ &+ 2\sin(\omega t)\hat{\mathbf{k}} \times\left( \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}- c \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \\  \end{aligned}

So for the electric field \mathbf{E} = \frac{1}{2}(F + F^\dagger) we have

\begin{aligned}\mathbf{E} &=\left( \cos(\omega t) + \sin(\omega t) \hat{\mathbf{k}} \times \right)\left( \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}- c \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \end{aligned} \quad\quad\quad(14)

Similarly for the magnetic field we have

\begin{aligned}F - F^\dagger&=e^{-i\boldsymbol{\omega} t} \left( e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k})-e^{-i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k})\right) \\ &=e^{-i\boldsymbol{\omega} t} \left( 2 i \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}+ 2 i c \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \\  \end{aligned}

So for the magnetic field c \mathbf{B} = \frac{1}{2i}(F - F^\dagger) we have

\begin{aligned}c \mathbf{B} &=\left( \cos(\omega t) + \sin(\omega t) \hat{\mathbf{k}} \times \right)\left( \sin(\mathbf{k} \cdot \mathbf{x}) \mathbf{E}_\mathbf{k}+ c \cos(\mathbf{k} \cdot \mathbf{x}) \mathbf{B}_\mathbf{k}\right) \end{aligned} \quad\quad\quad(15)

Observe that the action of the time dependent phasor has been expressed, somewhat abusively and sneakily, in a scalar plus cross product operator form. The end result, when applied to a vector perpendicular to \hat{\mathbf{k}}, is still a vector

\begin{aligned}e^{-i\boldsymbol{\omega} t} \mathbf{a}&=\left( \cos(\omega t) + \sin(\omega t) \hat{\mathbf{k}} \times \right) \mathbf{a} \end{aligned}
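This scalar plus cross product form of the phasor action can be checked numerically too. A sketch in the assumed Pauli matrix representation, with an arbitrary vector perpendicular to \hat{\mathbf{k}}:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def vec(v):
    return v[0]*s1 + v[1]*s2 + v[2]*s3

khat = np.array([0.0, 0.0, 1.0])
w, t = 2.0, 0.9
a = np.array([0.3, -0.7, 0.0])   # arbitrary vector perpendicular to khat

# the exponent X = -i w t khat squares to -(w t)^2, giving a cos + sin expansion
X = -1j * w * t * vec(khat)
rot = np.cos(w*t)*np.eye(2) + (np.sin(w*t)/(w*t)) * X

lhs = rot @ vec(a)                                        # e^{-i w khat t} a
rhs = vec(np.cos(w*t)*a + np.sin(w*t)*np.cross(khat, a))  # (cos + sin khat x) a
assert np.allclose(lhs, rhs)
```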

Also observe that the Hermitian conjugate split of the total field bivector F produces vectors \mathbf{E} and \mathbf{B}, not phasors. There is no further need to take real or imaginary parts nor treat the phasor (13) as an artificial mathematical construct used for convenience only.

Polar Form.

Suppose an explicit polar form is introduced for the plane vectors \mathbf{E}_\mathbf{k}, and \mathbf{B}_\mathbf{k}. Let

\begin{aligned}\mathbf{E}_\mathbf{k} &= E {\hat{\mathbf{E}}_k} \\ \mathbf{B}_\mathbf{k} &= B {\hat{\mathbf{E}}_k} e^{i\hat{\mathbf{k}} \theta} \end{aligned}

Then for the field we have

\begin{aligned}F &= e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (E + i c B e^{-i\hat{\mathbf{k}} \theta}) \hat{\mathbf{E}}_k \end{aligned} \quad\quad\quad(16)

For the conjugate

\begin{aligned}F^\dagger&=\hat{\mathbf{E}}_k(E - i c B e^{i\hat{\mathbf{k}} \theta})e^{-i \mathbf{k} \cdot \mathbf{x}}e^{i\boldsymbol{\omega} t} \\ &=e^{-i\boldsymbol{\omega} t} e^{-i \mathbf{k} \cdot \mathbf{x}} (E - i c B e^{-i\hat{\mathbf{k}} \theta}) \hat{\mathbf{E}}_k \end{aligned}

So, in the polar form we have for the electric, and magnetic fields

\begin{aligned}\mathbf{E} &= e^{-i\boldsymbol{\omega} t} (E \cos(\mathbf{k} \cdot \mathbf{x}) - c B \sin(\mathbf{k} \cdot \mathbf{x}) e^{-i \hat{\mathbf{k}}\theta}) \hat{\mathbf{E}}_k \\ c \mathbf{B} &= e^{-i\boldsymbol{\omega} t} (E \sin(\mathbf{k} \cdot \mathbf{x}) + c B \cos(\mathbf{k} \cdot \mathbf{x}) e^{-i \hat{\mathbf{k}}\theta}) \hat{\mathbf{E}}_k \end{aligned} \quad\quad\quad(17)

Observe that when \theta is an integer multiple of \pi, \mathbf{E} and \mathbf{B} are collinear, having the zero Poynting vector mentioned previously. Now, for arbitrary \theta it does not appear that there is any inherent perpendicularity between the electric and magnetic fields. It is common to read of light being the propagation of perpendicular fields, both perpendicular to the propagation direction. We have perpendicularity to the propagation direction by virtue of requiring that the field be a (Dirac) bivector, but it does not look like the solution requires any inherent perpendicularity for the field components. It appears that a mutually normal triplet of field vectors and propagation direction must actually be a special case. Intuition says that this freedom to pick different magnitudes or angle between \mathbf{E}_\mathbf{k} and \mathbf{B}_\mathbf{k} in the plane perpendicular to the transmission direction may correspond to different mixes of linear, circular, and elliptic polarization, but this has to be confirmed.

Working towards confirming (or disproving) this intuition, let’s find the constraints on the fields that lead to normal electric and magnetic fields. This should follow by taking dot products

\begin{aligned}\mathbf{E} \cdot \mathbf{B} c&=\left\langle{e^{-i\boldsymbol{\omega} t} (E \cos(\mathbf{k} \cdot \mathbf{x}) - c B \sin(\mathbf{k} \cdot \mathbf{x}) e^{-i \hat{\mathbf{k}}\theta}) \hat{\mathbf{E}}_k\hat{\mathbf{E}}_ke^{i\boldsymbol{\omega} t} (E \sin(\mathbf{k} \cdot \mathbf{x}) + c B \cos(\mathbf{k} \cdot \mathbf{x}) e^{i \hat{\mathbf{k}}\theta})}\right\rangle \\ &=\left\langle{(E \cos(\mathbf{k} \cdot \mathbf{x}) - c B \sin(\mathbf{k} \cdot \mathbf{x}) e^{-i \hat{\mathbf{k}}\theta})(E \sin(\mathbf{k} \cdot \mathbf{x}) + c B \cos(\mathbf{k} \cdot \mathbf{x}) e^{i \hat{\mathbf{k}}\theta})}\right\rangle \\ &=(E^2 - c^2 B^2) \cos(\mathbf{k} \cdot \mathbf{x}) \sin(\mathbf{k} \cdot \mathbf{x})+ c E B\left\langle{\cos^2(\mathbf{k} \cdot \mathbf{x}) e^{i \hat{\mathbf{k}} \theta}-\sin^2(\mathbf{k} \cdot \mathbf{x}) e^{-i \hat{\mathbf{k}} \theta}}\right\rangle \\ &=(E^2 - c^2 B^2) \cos(\mathbf{k} \cdot \mathbf{x}) \sin(\mathbf{k} \cdot \mathbf{x})+ c E B \cos(\theta) ( \cos^2(\mathbf{k} \cdot \mathbf{x}) -\sin^2(\mathbf{k} \cdot \mathbf{x}) ) \\ &=\frac{1}{2} (E^2 - c^2 B^2) \sin(2 \mathbf{k} \cdot \mathbf{x})+ c E B \cos(\theta) \cos(2 \mathbf{k} \cdot \mathbf{x}) \\  \end{aligned}

The only way this can be zero for all \mathbf{x} is if the two terms are separately zero, which means

\begin{aligned}{\left\lvert{\mathbf{E}_k}\right\rvert} &= c {\left\lvert{\mathbf{B}_k}\right\rvert} \\ \theta &= \frac{\pi}{2} + n \pi \end{aligned}
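As a quick numeric sanity check of the last line of the dot product expansion, the following Python sketch (field magnitudes and angle chosen arbitrarily for illustration) confirms that \mathbf{E} \cdot \mathbf{B} vanishes for all \mathbf{k} \cdot \mathbf{x} exactly when both constraints hold:

```python
import numpy as np

def E_dot_cB(E, cB, theta, kx):
    """Final line of the expansion: E . (cB) as a function of k.x."""
    return 0.5 * (E**2 - cB**2) * np.sin(2 * kx) \
        + E * cB * np.cos(theta) * np.cos(2 * kx)

kx = np.linspace(0.0, 2.0 * np.pi, 101)

# with |E| = c|B| and theta = pi/2 + n pi the fields stay perpendicular everywhere
assert np.allclose(E_dot_cB(2.0, 2.0, np.pi / 2, kx), 0.0)
assert np.allclose(E_dot_cB(2.0, 2.0, 3 * np.pi / 2, kx), 0.0)

# breaking either constraint spoils perpendicularity at some points
assert not np.allclose(E_dot_cB(2.0, 1.0, np.pi / 2, kx), 0.0)
assert not np.allclose(E_dot_cB(2.0, 2.0, np.pi / 3, kx), 0.0)
print("perpendicularity constraints confirmed")
```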

This simplifies the phasor considerably, leaving

\begin{aligned}E + i c B e^{-i\hat{\mathbf{k}} \theta}&=E(1 + i (\mp i\hat{\mathbf{k}} )) \\ &=E(1 \pm \hat{\mathbf{k}}) \end{aligned}

So the field is just

\begin{aligned}F = e^{-i \boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (1 \pm \hat{\mathbf{k}}) \mathbf{E}_\mathbf{k} \end{aligned}

Using this, and some regrouping, a calculation of the field components yields

\begin{aligned}\mathbf{E} &= e^{i \hat{\mathbf{k}}( \pm \mathbf{k} \cdot \mathbf{x} -\omega t )} \mathbf{E}_\mathbf{k} \\ c \mathbf{B} &= \pm e^{i \hat{\mathbf{k}}( \pm \mathbf{k} \cdot \mathbf{x} -\omega t )} i \hat{\mathbf{k}} \mathbf{E}_\mathbf{k} \end{aligned}

Observe that i\hat{\mathbf{k}} rotates any vector in the plane perpendicular to \hat{\mathbf{k}} by 90 degrees, so we have here c \mathbf{B} = \pm \hat{\mathbf{k}} \times \mathbf{E}, the coupling constraint on the fields for linearly polarized plane waves. Without the constraint \mathbf{E} \cdot \mathbf{B} = 0, it appears that we cannot have such a simple coupling between the field components.
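This rotation claim is easy to verify numerically. Here is a sketch using the standard Pauli-matrix representation of the spatial algebra (e_k \rightarrow \sigma_k, so the pseudoscalar i = e_1 e_2 e_3 becomes 1j times the identity) — the choice of representation, not the result, is the assumption here:

```python
import numpy as np

# Pauli-matrix representation of the spatial geometric algebra:
# e_k -> sigma_k, pseudoscalar i = e1 e2 e3 -> 1j * identity
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
i_ps = s[0] @ s[1] @ s[2]           # = 1j * identity

def vec(a):
    """Embed a 3-vector as a^k sigma_k."""
    return sum(ak * sk for ak, sk in zip(a, s))

khat = np.array([0.0, 0.0, 1.0])    # propagation direction e3
E = np.array([3.0, -2.0, 0.0])      # any vector perpendicular to khat

rotated = i_ps @ vec(khat) @ vec(E)                    # the product (i khat) E
assert np.allclose(rotated, vec(-np.cross(khat, E)))   # i khat E = -khat x E

# applying i khat twice rotates by 180 degrees: back to -E
assert np.allclose(i_ps @ vec(khat) @ rotated, vec(-E))
print("i khat acts as a 90 degree rotation in the plane perpendicular to khat")
```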

Energy and momentum for the phasor

To calculate the field energy density we can work with the two fields of equations (17), or work with the phasor (13) directly. From the phasor and the energy-momentum four vector (28) we have for the energy density

\begin{aligned}U &= T(\gamma_0) \cdot \gamma_0 \\ &= \frac{-\epsilon_0}{2}\left\langle  F \gamma_0 F \gamma_0  \right\rangle \\ &= \frac{-\epsilon_0}{2}\left\langle{e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) \gamma_0 e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) \gamma_0 }\right\rangle \\ &= \frac{-\epsilon_0}{2}\left\langle{e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) (\gamma_0)^2 e^{-i\boldsymbol{\omega} t} e^{-i \mathbf{k} \cdot \mathbf{x}} (-\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) }\right\rangle \\ &= \frac{-\epsilon_0}{2}\left\langle{e^{-i\boldsymbol{\omega} t} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) e^{-i\boldsymbol{\omega} t} (-\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) }\right\rangle \\ &= \frac{\epsilon_0}{2}\left\langle  (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k})  \right\rangle \\ &= \frac{\epsilon_0}{2} \left( (\mathbf{E}_k)^2 + c^2 (\mathbf{B}_\mathbf{k})^2\right) + {c \epsilon_0} \left\langle  i \mathbf{E}_\mathbf{k} \wedge \mathbf{B}_\mathbf{k}  \right\rangle \\ &= \frac{\epsilon_0}{2} \left( (\mathbf{E}_k)^2 + c^2 (\mathbf{B}_\mathbf{k})^2\right) + {c \epsilon_0} \left\langle  \mathbf{B}_\mathbf{k} \times \mathbf{E}_\mathbf{k}  \right\rangle \\  \end{aligned}

Quite anticlimactically, the energy is just the sum of the energies associated with the two parametrization constants, lending some justification for the initial choice to label these the electric and magnetic fields

\begin{aligned}U = \frac{\epsilon_0}{2} \left( (\mathbf{E}_k)^2 + c^2 (\mathbf{B}_\mathbf{k})^2\right) \end{aligned}

For the momentum, we want the difference of F F^\dagger and F^\dagger F

\begin{aligned}F F^\dagger &= e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) e^{-i \mathbf{k} \cdot \mathbf{x}} e^{i\boldsymbol{\omega} t}  \\ &= (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) \\ &= (\mathbf{E}_\mathbf{k})^2 + c^2 (\mathbf{B}_\mathbf{k})^2 - 2 c \mathbf{B}_\mathbf{k} \times \mathbf{E}_\mathbf{k} \end{aligned}

\begin{aligned}F^\dagger F &= (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) e^{-i \mathbf{k} \cdot \mathbf{x}} e^{i\boldsymbol{\omega} t}  e^{-i\boldsymbol{\omega} t} e^{i \mathbf{k} \cdot \mathbf{x}} (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k})  \\ &= (\mathbf{E}_\mathbf{k} - i c \mathbf{B}_\mathbf{k}) (\mathbf{E}_\mathbf{k} + i c \mathbf{B}_\mathbf{k}) \\ &= (\mathbf{E}_\mathbf{k})^2 + c^2 (\mathbf{B}_\mathbf{k})^2 + 2 c \mathbf{B}_\mathbf{k} \times \mathbf{E}_\mathbf{k} \end{aligned}

So we have for the momentum, also anticlimactically

\begin{aligned}\mathbf{P} = \frac{1}{c} T(\gamma_0) \wedge \gamma_0 = \epsilon_0 \mathbf{E}_\mathbf{k} \times \mathbf{B}_\mathbf{k}  \end{aligned}
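These two products can be cross-checked numerically in a Pauli-matrix representation of the spatial algebra (\sigma_k for e_k, 1j for the pseudoscalar, matrix Hermitian conjugation for the dagger — the representation is the sketch's assumption); \mathbf{E}_\mathbf{k} and c\mathbf{B}_\mathbf{k} below are arbitrary stand-in vectors:

```python
import numpy as np

# Pauli-matrix model of the spatial algebra: e_k -> sigma_k, i -> 1j * identity
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def vec(a):
    """Embed a 3-vector as a^k sigma_k."""
    return sum(ak * sk for ak, sk in zip(a, s))

rng = np.random.default_rng(0)
Ek = rng.standard_normal(3)        # stand-ins for E_k and c B_k
cBk = rng.standard_normal(3)

F = vec(Ek) + 1j * vec(cBk)        # F = E + i c B; matrix dagger = GA dagger here
FFd = F @ F.conj().T
FdF = F.conj().T @ F

# F F^dagger = E^2 + c^2 B^2 - 2 c B x E, and opposite cross-term sign for F^dagger F
assert np.allclose(FFd, (Ek @ Ek + cBk @ cBk) * I2 - 2 * vec(np.cross(cBk, Ek)))
assert np.allclose(FdF, (Ek @ Ek + cBk @ cBk) * I2 + 2 * vec(np.cross(cBk, Ek)))

# the difference isolates the E x B direction that appears in the momentum density
assert np.allclose(FFd - FdF, 4 * vec(np.cross(Ek, cBk)))
print("phasor energy-momentum products verified")
```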


Well, that’s enough for one day. Understanding how to express circular and elliptic polarization is one of the logical next steps. I seem to recall from Susskind’s QM lectures that these can be considered superpositions of linearly polarized waves, so examining a sum of two co-directionally propagating fields would seem to be in order. There also ought to be a more natural way to express the perpendicularity requirement for the field and the propagation direction. The fact that the product of the field components and the propagation direction is proportional to the spatial pseudoscalar can probably be utilized to tidy this up, and may also produce a form that allows for simpler summation of fields in different propagation directions. It also seems reasonable to consider a planar Fourier decomposition of the field components, perhaps framing the superposition of multiple fields in that context.

Appendix. Background details.

Conjugate split

The Hermitian conjugate is defined as

\begin{aligned}A^\dagger = \gamma_0 \tilde{A} \gamma_0 \end{aligned}

The conjugate action on a multivector product is straightforward to calculate

\begin{aligned}(A B)^\dagger&= \gamma_0 (A B)^{\tilde{}} \gamma_0 \\ &= \gamma_0 \tilde{B} \tilde{A} \gamma_0 \\ &= \gamma_0 \tilde{B} {\gamma_0}^2 \tilde{A} \gamma_0 \\ &= B^\dagger A^\dagger \end{aligned}

For a spatial vector Hermitian conjugation leaves the vector unaltered

\begin{aligned}\mathbf{a}^\dagger&= \gamma_0 (\gamma_k \gamma_0)^{\tilde{}} a^k \gamma_0 \\ &= \gamma_0 (\gamma_0 \gamma_k) a^k \gamma_0 \\ &= \gamma_k a^k \gamma_0 \\ &= \mathbf{a} \end{aligned}

But the pseudoscalar is negated

\begin{aligned}i^\dagger&=\gamma_0 \tilde{i} \gamma_0 \\ &=\gamma_0 i \gamma_0 \\ &=-\gamma_0 \gamma_0 i \\ &=- i \\  \end{aligned}

This allows for a split by conjugation of the field into its electric and magnetic field components.

\begin{aligned}F^\dagger&= -\gamma_0 ( \mathbf{E} + i c \mathbf{B}) \gamma_0 \\ &= -\gamma_0^2 ( -\mathbf{E} + i c \mathbf{B}) \\ &= \mathbf{E} - i c\mathbf{B} \\  \end{aligned}

So we have

\begin{aligned}\mathbf{E} &= \frac{1}{2}(F + F^\dagger) \\ c \mathbf{B} &= \frac{1}{2i}(F - F^\dagger) \end{aligned} \quad\quad\quad(25)
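A small numeric check of this split, in a Pauli-matrix representation of the spatial algebra (a sketch assumption: e_k \rightarrow \sigma_k, i \rightarrow 1j, and the dagger becomes the ordinary matrix Hermitian conjugate, which indeed fixes vectors and negates the pseudoscalar):

```python
import numpy as np

# Pauli representation: spatial vectors are Hermitian matrices, the
# pseudoscalar i maps to 1j, and A^dagger = gamma_0 ~A gamma_0 becomes
# the matrix Hermitian conjugate
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def vec(a):
    """Embed a 3-vector as a^k sigma_k."""
    return sum(ak * sk for ak, sk in zip(a, s))

rng = np.random.default_rng(1)
E, cB = rng.standard_normal(3), rng.standard_normal(3)
F = vec(E) + 1j * vec(cB)          # F = E + i c B
Fd = F.conj().T

# dagger leaves spatial vectors alone and negates the pseudoscalar
assert np.allclose(vec(E).conj().T, vec(E))
assert np.allclose((1j * np.eye(2)).conj().T, -1j * np.eye(2))

# conjugate split (25): E = (F + F^dagger)/2,  c B = (F - F^dagger)/(2i)
assert np.allclose((F + Fd) / 2, vec(E))
assert np.allclose((F - Fd) / (2j), vec(cB))
print("conjugate split recovers E and cB")
```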

Field Energy Momentum density four vector.

In the GA formalism the energy momentum tensor is

\begin{aligned}T(a) = \frac{\epsilon_0}{2} F a \tilde{F} \end{aligned}

It is not necessarily obvious this bivector-vector-bivector product construction is even a vector quantity. Expansion of T(\gamma_0) in terms of the electric and magnetic fields demonstrates this vectorial nature.

\begin{aligned}F \gamma_0 \tilde{F}&=-(\mathbf{E} + i c \mathbf{B}) \gamma_0 (\mathbf{E} + i c \mathbf{B}) \\ &=-\gamma_0 (-\mathbf{E} + i c \mathbf{B}) (\mathbf{E} + i c \mathbf{B}) \\ &=-\gamma_0 (-\mathbf{E}^2 - c^2 \mathbf{B}^2 + i c (\mathbf{B} \mathbf{E} - \mathbf{E} \mathbf{B}) ) \\ &=\gamma_0 (\mathbf{E}^2 + c^2 \mathbf{B}^2) - 2 \gamma_0 i c (\mathbf{B} \wedge \mathbf{E}) \\ &=\gamma_0 (\mathbf{E}^2 + c^2 \mathbf{B}^2) + 2 \gamma_0 c (\mathbf{B} \times \mathbf{E}) \\ &=\gamma_0 (\mathbf{E}^2 + c^2 \mathbf{B}^2) + 2 \gamma_0 c \gamma_k \gamma_0 (\mathbf{B} \times \mathbf{E})^k \\ &=\gamma_0 (\mathbf{E}^2 + c^2 \mathbf{B}^2) + 2 \gamma_k (\mathbf{E} \times (c \mathbf{B}))^k \\  \end{aligned}

Therefore T(\gamma_0), the energy momentum tensor biased towards a particular observer frame \gamma_0, is

\begin{aligned}T(\gamma_0)&=\gamma_0 \frac{\epsilon_0}{2} (\mathbf{E}^2 + c^2 \mathbf{B}^2) + \gamma_k \epsilon_0 (\mathbf{E} \times (c \mathbf{B}))^k \end{aligned} \quad\quad\quad(28)

Recognizable in the components of T(\gamma_0) are the field energy density and momentum density. In particular, the energy density can be obtained by dotting with \gamma_0, whereas the (spatial vector) momentum follows by wedging with \gamma_0.

These are

\begin{aligned}U \equiv T(\gamma_0) \cdot \gamma_0 &= \frac{1}{2} \left( \epsilon_0 \mathbf{E}^2 + \frac{1}{\mu_0} \mathbf{B}^2 \right) \\ c \mathbf{P} \equiv T(\gamma_0) \wedge \gamma_0 &= \frac{1}{c \mu_0} \mathbf{E} \times \mathbf{B} \end{aligned}

In terms of the combined field these are

\begin{aligned}U &= \frac{-\epsilon_0}{4}( F \gamma_0 F \gamma_0 + \gamma_0 F \gamma_0 F) \\ c \mathbf{P} &= \frac{-\epsilon_0}{4}( F \gamma_0 F \gamma_0 - \gamma_0 F \gamma_0 F) \end{aligned}

Summarizing with the Hermitian conjugate

\begin{aligned}U &= \frac{\epsilon_0}{4}( F F^\dagger + F^\dagger F) \\ c \mathbf{P} &= \frac{\epsilon_0}{4}( F F^\dagger - F^\dagger F) \end{aligned}
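Both the conjugate split and the vector expansion of F \gamma_0 \tilde{F} can be spot-checked numerically with a Dirac-matrix representation of STA; the particular matrices below (\gamma_0 = \text{diag}(1,1,-1,-1) with Pauli off-diagonal blocks for \gamma_k) are a standard choice assumed for this sketch:

```python
import numpy as np

# Dirac-matrix model of STA, metric signature (+,-,-,-):
# gamma_0 = diag(1,1,-1,-1), spatial gamma_k from Pauli blocks
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.diag([1, 1, -1, -1]).astype(complex)
g = [np.block([[np.zeros((2, 2)), sk], [-sk, np.zeros((2, 2))]]) for sk in s]
i_ps = g0 @ g[0] @ g[1] @ g[2]      # pseudoscalar gamma_0 gamma_1 gamma_2 gamma_3

def vec(a):
    """Relative vector a^k sigma_k, with sigma_k = gamma_k gamma_0."""
    return sum(ak * gk @ g0 for ak, gk in zip(a, g))

rng = np.random.default_rng(2)
E, cB = rng.standard_normal(3), rng.standard_normal(3)
F = vec(E) + i_ps @ vec(cB)         # F = E + i c B

# gamma_0 F gamma_0 = -F^dagger, so F^dagger = E - i c B
assert np.allclose(-g0 @ F @ g0, vec(E) - i_ps @ vec(cB))

# F gamma_0 ~F = gamma_0 (E^2 + c^2 B^2) + 2 gamma_k (E x cB)^k, using ~F = -F
lhs = F @ g0 @ (-F)
rhs = (E @ E + cB @ cB) * g0 + 2 * sum(ck * gk for ck, gk in zip(np.cross(E, cB), g))
assert np.allclose(lhs, rhs)
print("T(gamma_0) expansion and conjugate split verified")
```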


Calculation of the divergence produces the components of the Lorentz force densities

\begin{aligned}\nabla \cdot T(a)&= \frac{\epsilon_0}{2} {\left\langle{{ \nabla (F a \tilde{F}) }}\right\rangle} \\ &= -\frac{\epsilon_0}{2} {\left\langle{{ (\nabla F) a F + (F \nabla) F a }}\right\rangle} \\  \end{aligned}

Here the gradient is used implicitly in bidirectional form, where the direction is implied by context, and \tilde{F} = -F has been used in the second step. From Maxwell’s equation we have

\begin{aligned}J/\epsilon_0 c&= (\nabla F)^{\tilde{}} \\ &= (\tilde{F} \tilde{\nabla}) \\ &= -(F \nabla) \end{aligned}

and continuing the expansion

\begin{aligned}\nabla \cdot T(a)&= -\frac{1}{2c} {\left\langle{{ J a F - J F a }}\right\rangle} \\ &= -\frac{1}{2c} {\left\langle{{ F J a - J F a }}\right\rangle} \\ &= -\frac{1}{2c} {\left\langle{{ (F J - J F) a }}\right\rangle} \\  \end{aligned}

Wrapping up, the divergence and the adjoint of the energy momentum tensor are

\begin{aligned}\nabla \cdot T(a) &= -\frac{1}{c} (F \cdot J) \cdot a \\ \bar{T}(\nabla) &= -F \cdot J/c \end{aligned}

When integrated over a volume, the quantities F \cdot J/c are the components of the RHS of the Lorentz force equation \dot{p} = q F \cdot v/c.
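That (FJ - JF)/2 = F \cdot J really is a four-vector (grade one), as required for a force density, can also be checked numerically in a Dirac-matrix representation (again a standard assumed representation, with random stand-in field and current components):

```python
import numpy as np

# Dirac matrices: gamma_0 = diag(1,1,-1,-1), spatial gamma_k from Pauli blocks
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.diag([1, 1, -1, -1]).astype(complex)
g = [g0] + [np.block([[np.zeros((2, 2)), sk], [-sk, np.zeros((2, 2))]]) for sk in s]
eta = np.diag([1.0, -1.0, -1.0, -1.0])          # metric (+,-,-,-)

rng = np.random.default_rng(3)
E, cB = rng.standard_normal(3), rng.standard_normal(3)
i_ps = g[0] @ g[1] @ g[2] @ g[3]
F = sum(Ek * g[k + 1] @ g[0] for k, Ek in enumerate(E)) \
    + i_ps @ sum(Bk * g[k + 1] @ g[0] for k, Bk in enumerate(cB))
J = sum(Jm * gm for Jm, gm in zip(rng.standard_normal(4), g))

FdotJ = (F @ J - J @ F) / 2

# grade-1 check: F.J is exactly a linear combination of the gamma_mu,
# with components extracted via tr(v gamma_mu)/4 = v_mu
comps = np.array([eta[m, m] * np.trace(FdotJ @ g[m]).real / 4 for m in range(4)])
assert np.allclose(FdotJ, sum(cm * gm for cm, gm in zip(comps, g)))
print("(F J - J F)/2 is a pure four-vector: the Lorentz force density F.J")
```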


[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

Posted in Math and Physics Learning.